What would be some biologically plausible sensations?

Clément Michaud

Nov 29, 2024, 12:52:48 PM
to open-nars
Hey,

I am trying to use the NAL theory to solve some problems, and I am wondering what form the sensations provided to the system should take without leaking too much prior knowledge. By sensations, I mean the external stimuli that enter the system and that are represented as judgements. The question that has been running in my mind for a couple of days/weeks now is: what could be a biologically plausible sensation? In many examples in the book, the judgements that are used embed a lot of meaning. For instance, (bird -> animal) embeds some information about two grounded concepts, but I do not think this kind of information (such high-level concepts and their relation) would plausibly be accessible to a real brain.

Do you have any thoughts on what the raw sensations used by biological brains would be? After some thought, I hypothesized that the only information the brain might possess is the current value of the sensor and the identifier of the sensor. In that case, a NAL system having multiple sensors would only get sensations of the form T --> SensorX, where T is a grounded token representing the input value applied to the sensor and X is the id of the sensor. Perhaps some sensors might also relate to each other architecturally, like ( (Sensor1 x Sensor2) --> LeftOf ), but apart from that, I have a hard time imagining anything else without leaking prior knowledge that we want the system to capture instead.
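
For concreteness, this is roughly the encoding I have in mind, as a small Python sketch (the quantization into a handful of tokens and the naming are my own assumptions, nothing prescribed by NAL):

def quantize(value, levels=8):
    """Map a raw reading in [0.0, 1.0] to one of a few grounded tokens."""
    return "v%d" % min(int(value * levels), levels - 1)

def sensation(sensor_id, value):
    """Encode a raw reading as a Narsese event of the form <T --> SensorX>."""
    return "<{%s} --> sensor%d>. :|:" % (quantize(value), sensor_id)

print(sensation(1, 0.93))                      # <{v7} --> sensor1>. :|:
print(sensation(2, 0.10))                      # <{v0} --> sensor2>. :|:
print("<(*, sensor1, sensor2) --> leftOf>.")   # innate architectural relation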

What else could it be in your opinion? What are your thoughts?

Regards,
Clément

Pei Wang

Nov 29, 2024, 5:02:42 PM
to open...@googlegroups.com
Hi Clément,

This is exactly what the new tech report attempts to address. As mentioned in it, the stream of input may consist of (1) Narsese tasks (which is the only form in the previous versions), (2) sensation intensities (numbers), or (3) identifiers for recognized entities. The last two forms will be converted, or perceived, in the channel into the first form for the following inference.

With multiple sensors, the temporal-spatial structures of the inputs will also be recognized and processed, as you suggested. The tech report only directly discusses the processing of temporal structure, but the same ideas will also be applied to spatial structure, as in your ( (Sensor1 x Sensor2) --> LeftOf ) example, as suggested in https://cis.temple.edu/~pwang/Publication/perception.pdf
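
As a rough illustration of the conversion (the code below is only a sketch for this discussion, not the channel implementation described in the tech report; the names and the mapping to truth values are arbitrary):

def intensity_to_narsese(sensor_id, intensity):
    # Form (2): a numeric intensity becomes a graded judgment about the sensor.
    frequency = max(0.0, min(1.0, intensity))
    return "<{%s} --> [active]>. :|: %%%.2f;0.90%%" % (sensor_id, frequency)

def entity_to_narsese(entity_id, category):
    # Form (3): an identifier for a recognized entity becomes a judgment.
    return "<{%s} --> %s>. :|:" % (entity_id, category)

print(intensity_to_narsese("sensor1", 0.8))    # <{sensor1} --> [active]>. :|: %0.80;0.90%
print(entity_to_narsese("entity42", "bird"))   # <{entity42} --> bird>. :|: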

I agree that at the biological level all raw sensations are nothing but stimuli (with quantitative and qualitative differences) with temporal-spatial relationships. As written in a footnote of the tech report, "In principle, all types of Narsese statements, with various copulas and connectors, can be generated from the temporal-spatial relations innate in the sensorimotor experience. However, it does not mean that the meaning of a statement can always be reduced to (or grounded in) sensorimotor experience, nor that the grammar rules of Narsese and the inference rules of NAL can be learned from such experience."

Regards,

Pei


Christian Hahm

Nov 29, 2024, 11:26:46 PM
to open-nars
Dear Clement,


Additionally, see our vision experiment from 2022 using raw pixel data with NARS to classify images of handwritten digits. In that case, the subject of a statement was the photosensor instance (e.g., "{pixel_1_5}") and the predicate was the "sensation type" (e.g., "[bright]"). In this way, NARS would construct compounds using pixel events like "<{pixel_1_5} --> [bright]>", and could predict the digit in the image with moderate accuracy.
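
For reference, the encoding was along these lines (a simplified Python sketch of the idea, not the exact code from the experiment; the threshold and naming are illustrative):

import numpy as np

def image_to_narsese_events(image, threshold=0.5):
    """Turn a 2D array of pixel intensities in [0, 1] into Narsese pixel events."""
    events = []
    for (row, col), value in np.ndenumerate(image):
        if value >= threshold:
            events.append("<{pixel_%d_%d} --> [bright]>. :|:" % (row, col))
    return events

# In training one would also supply a label event for each image,
# e.g. something like "<{img} --> [seven]>. :|:" (naming here is illustrative).
image = np.random.rand(28, 28)                 # stand-in for a handwritten digit
for event in image_to_narsese_events(image)[:3]:
    print(event)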



Regards,
Christian

Keith

Nov 30, 2024, 12:14:18 AM
to open...@googlegroups.com
Hello, 
I have been receiving emails from this group for decades without having thoughts to share, although fascinated.
I thank you all. 
It's dawned on me that we'll have advanced digital reasoning, a.k.a. AGI, any minute now. Inevitably, ASI follows suit rather quickly indeed.
With this realization, and with intense observation of the incredible pursuit of this technology, I can see many approaches. Some, like attention-based transformers, have received immense funding; other approaches have not. But this should, and I expect will, be reviewed by our coming AGIs. Meaning that NARS is likely a valid logical path that has been overrun by vested progressives.
I do believe we're currently living in a period where more backtracking will happen, and prove more fruitful, than in any other period in humanity's history. I am thankful for this. Maybe we'll get above control and greed, and move toward universal betterment for all of life, starting with humanity, out of our necessities.
 Berick Cook saw this inside of NARS.
Thank you Patrick Hammer and every associate, and thinker involved.
From the outside looking in, it appears to me that along the way, from the late '80s to the very early '90s, Hammer imagined a logic and Goertzel began an architectural framework for AGI. Berick kept envisioning "the algorithmic way" that, ~8 years later (thankfully he's incredibly persistent!), has led to AIRIS, while he and Goertzel, and many others, came together to build OpenCog Hyperon. Finally, a valid path to true AGI.
Seemingly not just a stair step.
Once AGI is truly underway I imagine, and quite frankly expect, "It" (or collectively "Them") to go back and accumulate, then study, every approach to reasoning algorithms. Then we'll be inside of the very best possibilities, at least at that point in time.
 We're living in the most interesting of periods, in my opinion.

Thank you all.

Keith Beaudoin 






Shubhamkar Ayare

Nov 30, 2024, 3:05:14 AM
to open-nars
There have indeed been claims that all of intelligence can be reduced to transducers. See Pylyshyn's works for elaboration, but I'll quote a relevant discussion from [1]:

It has often been assumed (and at one time it was argued explicitly by Fodor 1980a) that an account of cognitive processes begins and ends with representations. The only exception to this, it was assumed by many (including, implicitly, Pylyshyn 1984), occurs in what are called transducers (or, in the biological literature, ‘‘sensors’’), whose job is to convert patterns of physical energy into states of the brain that constitute the encodings of the incoming information. [...] Given the view that the bridge from world to mind resides in transduction, the problem then becomes to account for how transduced properties become representations, or semantically evaluable states and, in particular, how they come to have the particular representational content that they have; how, for example, when confronted with a red fire engine, the transducers of the visual system generate a state that corresponds to the percept of a red fire engine and not a green bus.

[...]

At one time it was seriously contemplated that this was because we had a ‘‘red-fire-engine transducer’’ that caused the ‘‘red-fire-engine cell’’ to fire, which explained why that cell corresponded to the content red-fire-engine. This clearly will not work for many reasons, one of which is that once you have the capacity for detecting red, green, pink, etc., and fire-engines, buses, etc., you have the capacity to detect an unbounded number of things, including green fire-engines, pink buses, etc. In other words, if you are not careful you will find yourself having to posit an unlimited number of transducer types, because without some constraints transduction becomes productive. Yet even with serious constraints on transduction (such as proposed in Pylyshyn 1984, chap. 9) the problem of content remains. How do we know that the fire-engine transducer is not actually responding to wheels or trucks or engines or ladders, any of which would do the job for any finite set of fire engines? This problem is tied up with the productivity and systematicity of perception and representation. Failure to recognize this is responsible for many dead-end approaches to psychological theorizing (Fodor and Pylyshyn 1981; Fodor and Pylyshyn 1988).

As such, it becomes difficult to compare the "intelligence" of different systems using different transducers. Kristinn Thorisson and colleagues were working on a Task Theory for AGI, but I'm unaware of its current state.

I find the principles of NARS fascinating, both because of the multiple ways it deviates from standard logic and because of how universally it can be applied, as the work by Christian Hahm and colleagues on visual perception shows. There's also work on speech processing using NARS [2].

On the other hand, I find myself taking the position that even though we can learn everything using a single framework, that doesn't necessarily mean we should, for reasons of compute efficiency. In particular, if the goal is to develop human-like intelligent systems for the known environments we grow up and live in, then I'm inclined to look at development (ontogeny) and evolution (phylogeny) to see what humans are endowed with. There has been plenty of work on this in recent decades [3,4].

I just had a look at the table of contents of [3], and unfortunately, it seems that even this line of work is missing an emotion/drive theory. I find it more plausible to explain behavior as directed towards the fulfilment of certain drives - which is what any self-maintaining system would need to do - rather than towards particular goals and criteria set by the designer. Again, you can learn drives from scratch, but we had evolution work them out for us over millions of years, across trillions of individuals in each generation. Plus, an intelligent system would also need an understanding of emotions, because it should be able to learn from other humans, who have emotions.

[3] also seems to assume that we are philosophical zombies [5], but even ignoring phenomenal consciousness and focusing on access consciousness, it seems to have nothing significant to say about its role in learning. Regarding consciousness, I'm particularly attracted to Global Workspace Theory [6], though I am in no position to evaluate it critically; there have been cognitive architectures based on GWT and its neural equivalents.

This has become a rant. To tie it back to NARS, one of the things I'd like to try some day is to come up with a high-dimensional / neural-network equivalent of NARS. That, or figuring out the appropriate interface between modern neural networks and NARS. NARS would be involved in cognition, while neural networks such as Segment-Anything or template-based object detection [7,8] would handle perception; but maybe even the neat integration itself requires a neural equivalent of NARS. I am unaware whether anyone is already working on a neural equivalent of NARS.
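
As a toy illustration of the kind of interface I mean (the detector output format and the score-to-confidence mapping below are purely my assumptions, not an existing API):

def detection_to_narsese(instance_id, label, score, max_confidence=0.9):
    """Map a neural detector's (label, score) output to a Narsese judgment."""
    confidence = round(max(0.0, min(1.0, score)) * max_confidence, 2)
    return "<{%s} --> [%s]>. %%1.00;%.2f%%" % (instance_id, label, confidence)

# e.g. detections coming out of a segmentation + classification pipeline:
print(detection_to_narsese("obj1", "fire_engine", 0.97))   # strong judgment
print(detection_to_narsese("obj2", "bus", 0.42))           # weak judgment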

References:
  1. Pylyshyn, Zenon W. (2007). "Things and Places: How the Mind Connects with the World". MIT Press. 
  2. https://www.applied-nars.com/articles/speech-recognition-using-nars
  3. Spelke, E. (2022). "What Babies Know: Core Knowledge and Composition Volume 1". Oxford University Press. https://books.google.at/books?id=E6B1EAAAQBAJ.
  4. Spelke, Elizabeth S. (2023). "Précis of What Babies Know". Behavioral and Brain Sciences, 47  http://dx.doi.org/10.1017/S0140525X23002443.
  5. https://en.wikipedia.org/wiki/Philosophical_zombie
  6. https://en.wikipedia.org/wiki/Global_workspace_theory
  7. https://arxiv.org/abs/1911.11822
  8. Frink, Travis (2022). "Enhanced Deep Template-Based Object Instance Detection". Master's thesis, University of Rhode Island.

Clément Michaud

Dec 2, 2024, 4:51:02 AM
to open-nars
Thank you for sharing the publication, Christian. I indeed saw the recording of that experiment; it was cool.

The approach seems to fit with the idea of perception I described in my first message. However, for the first layers of the visual sensors, I am wondering whether "logical inference" could apply efficiently. In my opinion, reasoning would apply at a higher level of abstraction, while lower levels would be closer to ANNs approximating concepts. Using matrices as NAL concepts, as described in the publication shared by Pei, is closer to that, though.

Clément

Clément Michaud

Dec 2, 2024, 5:11:19 AM
to open-nars
Thank you for sharing your opinion and the publication, Pei. I'm glad to see that my view on perception converges with yours.

However, one thing in your answer that still raises questions, in my opinion, is: what could plausibly be the seed tasks/goals that the system begins with? If a task has too much syntactic complexity, we give away some prior knowledge as well. I know that crafting those tasks is a shortcut to avoid some learning cycles, but I really wonder whether this could be an obstacle to AGI. My approach is to focus on agents with low reasoning capacity and then build on top of that. What if the system starts in nature with no prior knowledge except what nature gives it as feedback? My way of working right now is to limit the goal to a biologically plausible one: the agent tries to optimize a life gauge that gets increased or decreased according to the expectations of the environment/problem the agent lives in. If it succeeds in acting in this world as the world expects, its life gauge increases; otherwise it decreases little by little. If the gauge goes to zero, I restart a new simulation with the knowledge acquired from the previous simulation, as some kind of genetic learning. The other questions/tasks could come from emotions, but I somehow see those as part of the life gauge, except that the agent would only indirectly die from being, for example, too happy or too sad.
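
To make that concrete, here is a rough sketch of the loop I have in mind (the toy environment, the gauge dynamics and the "genetic" carry-over are all my own assumptions, just to illustrate the idea):

import random

class ToyEnvironment:
    """Stand-in world: it mildly rewards action 1 and punishes everything else."""
    def reset(self):
        return 0
    def act(self, action):
        feedback = 0.1 if action == 1 else -0.5
        return random.randint(0, 3), feedback      # (next observation, feedback)

class ToyAgent:
    """Stand-in agent; a real one would be, e.g., a NARS instance fed Narsese events."""
    def __init__(self, knowledge=None):
        self.knowledge = dict(knowledge or {})
    def step(self, observation, life):
        # A real agent would reason here; this stub acts from memory or randomly.
        return self.knowledge.get(observation, random.randint(0, 1))
    def export_knowledge(self):
        return self.knowledge

def run_life(agent, env, life=10.0, decay=0.2):
    """One simulated life: the gauge rises or falls with the world's feedback."""
    obs = env.reset()
    while life > 0:
        obs, feedback = env.act(agent.step(obs, life))
        life += feedback - decay                   # feedback never outweighs decay here
    return agent.export_knowledge()

knowledge = None
for generation in range(5):                        # "genetic" carry-over across lives
    knowledge = run_life(ToyAgent(knowledge), ToyEnvironment())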

Do you have any opinion on whether we should include other biologically plausible questions that I have not thought about?

Clément

David Ireland

Dec 2, 2024, 4:22:12 PM
to open...@googlegroups.com
Hi Clément,

As we have similar interests, I thought I'd chime in. The human brain receives about 11 million pieces of information a second. Many of these come from the skin (our largest organ; e.g., touch, temperature, ...), but also from sound, vision, etc.

The brain does a really good job of prioritising these (except in people with certain neurological conditions). But it's important to remember NARS isn't trying to recreate a human brain.

I've attached two papers I wrote for the AGI conference -  both using NARS and looking at possible primordial desires for ethics and language. I think self-preservation is a required  "uber" goal for any species. 

I also recall a talk on "semantic primes" at the AGI 2024 conference that might interest you, as it's on the same topic.

Hope that helps.

Regards,
David




mirabile-dictu-language-acquisition-non-axiomatic-reasoning-system.pdf
camera-ready-primum-non-nocere.pdf

Pei Wang

Dec 3, 2024, 9:05:47 AM
to open...@googlegroups.com
Hi Clément,

I don't have a detailed answer yet, but in general I don't consider the initial tasks (including goals) as "seeds", as people tend to use that word to suggest that all other tasks grow out of them. Instead, NARS accepts and derives new tasks all the time, which are not necessarily implied by, or even consistent with, the initial tasks.

That being said, the initial tasks indeed play a crucial role in shaping the system's beliefs and desires. I tend to compare it to the process of educating a human child, in which the teaching materials matter, as well as their order and timing. As you wrote, starting with complicated materials is not a good idea. I don't reject the idea of using innate/implanted knowledge, especially for practical applications, though these materials should still be revisable, unless the designers know for sure that certain knowledge won't need to be reconsidered in any situation.

I can see the theoretical and practical value of focusing on biologically plausible tasks; I just don't want to use them to restrict all intelligent systems. I fully agree that "intelligence" can be combined with "evolution", as discussed in https://cis.temple.edu/~pwang/Publication/roadmap.pdf, Section 3, which also mentions "education".

Regards,

Pei

Clément Michaud

Dec 3, 2024, 12:03:10 PM
to open-nars
Hi Pei,

I see your point. I'm focusing on biologically plausible tasks as a way to find the "minimal" requirements for a system to learn, not as a restriction or because I'm a biologist, because I'm not: I am a software engineer, and I'm more interested in solving very simple puzzles than in making an emotionally intelligent agent :). Right now I am trying to make an agent understand the concept of a simple series: u(n+1) = u(n) + 1 with u(0) = 0. I want it to abstract the concept of natural numbers so that it can use it in more advanced problems (counting, addition, multiplication).
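
For concreteness, this is roughly how I present the series to the agent (the encoding of terms as opaque tokens is my own choice, meant to avoid leaking arithmetic into the input):

def series_events(n_terms):
    """Present u(0)=0, u(n+1)=u(n)+1 as a stream of events over opaque tokens,
    so the agent has to abstract the successor relation by itself."""
    return ["<{t%d} --> [current]>. :|:" % n for n in range(n_terms)]

for event in series_events(4):
    print(event)
# Prints <{t0} --> [current]>. :|: through <{t3} --> [current]>. :|:, one per cycle;
# the hope is that the system induces a temporal implication from each token to the next.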

I would not reject the idea of implants either if I felt it was feasible, but I doubt it is, and that is also the reason for my question on tasks/goals. I doubt it because, as the agent lives, it builds bigger and bigger concepts that are huge/complex/unreadable combinations and recombinations of raw sensations. The concept of "bird" might be represented by a concept that is built relative to multiple thousands of other concepts (or even more) that are grounded in the agent's sensations. So even if we wanted to implant anything (question or judgement), my question would be: how would the agent represent the question itself in order to "understand" it? I like drawing a parallel with communication between humans: each person has a different representation of a shared idea (a bird, for instance); they can share/communicate the concept with spoken words/drawings, but the internal representation in each of the two brains is likely very different (because each person has their own experience and their own hardware). For writing any high-level task or judgement, we would need to know ahead of time the representation of the concepts we want the agent to act on. What if it does not even have them yet? If I draw a parallel with biological agents, it would be like putting a probe in the brain of the agent to try to figure out the components of the goal we want to implant and build the goal out of them, but then the goal would be crafted for this very specific agent and no one else could "understand" it...

Where I follow you, though, is that the education path taken by the agent seems very important, as well as imitation learning, imo.

Regards,
Clément

Clément Michaud

Dec 3, 2024, 12:05:53 PM
to open-nars
Thanks for sharing David, I will read them.

I do agree with the "uber" goal of self-preservation. You totally got the point of my question, which, to summarize, was: can we imagine any other "uber" goal than self-preservation?

Regards,
Clément

Pei Wang

Dec 4, 2024, 6:27:26 PM
to open...@googlegroups.com
Hi Clément,

On Tue, Dec 3, 2024 at 12:03 PM Clément Michaud <clement....@gmail.com> wrote:
Hi Pei,

I see your point. I'm focusing on biologically plausible tasks as a way to find the "minimal" requirements for a system to learn, not as a restriction or because I'm a biologist, because I'm not: I am a software engineer, and I'm more interested in solving very simple puzzles than in making an emotionally intelligent agent :). Right now I am trying to make an agent understand the concept of a simple series: u(n+1) = u(n) + 1 with u(0) = 0. I want it to abstract the concept of natural numbers so that it can use it in more advanced problems (counting, addition, multiplication).

To learn math is a big topic. I have some ideas in https://cis.temple.edu/tagit/publications/PAGI-TR-15.pdf

I would not reject the idea of implants either if I felt it was feasible, but I doubt it is, and that is also the reason for my question on tasks/goals. I doubt it because, as the agent lives, it builds bigger and bigger concepts that are huge/complex/unreadable combinations and recombinations of raw sensations. The concept of "bird" might be represented by a concept that is built relative to multiple thousands of other concepts (or even more) that are grounded in the agent's sensations. So even if we wanted to implant anything (question or judgement), my question would be: how would the agent represent the question itself in order to "understand" it? I like drawing a parallel with communication between humans: each person has a different representation of a shared idea (a bird, for instance); they can share/communicate the concept with spoken words/drawings, but the internal representation in each of the two brains is likely very different (because each person has their own experience and their own hardware). For writing any high-level task or judgement, we would need to know ahead of time the representation of the concepts we want the agent to act on. What if it does not even have them yet? If I draw a parallel with biological agents, it would be like putting a probe in the brain of the agent to try to figure out the components of the goal we want to implant and build the goal out of them, but then the goal would be crafted for this very specific agent and no one else could "understand" it...

Yes, it will be a complicated process. I agree that "concept" is basically subjective and personal, and that in communication a concept is expressed by a word or a phrase, but the mapping changes from person to person, which is how misunderstanding happens. An implant, if it is used, will probably use concepts, not words, so there won't be an understanding process to convert a linguistic expression into a conceptual representation.

Where I follow you, though, is that the education path taken by the agent seems very important, as well as imitation learning, imo.

Yes. Education and socialization shape the subjective concepts and beliefs in the "intersubjective" direction, to various extents, by imposing a common experience on the systems.

Regards,

Pei