There have indeed been claims that all of intelligence can be reduced to transducers. See Pylyshyn's works for elaboration, but I'll quote a relevant discussion from [1]:
It has often been assumed (and at one time it was argued explicitly by Fodor 1980a) that an account of cognitive processes begins and ends with representations. The only exception to this, it was assumed by many (including, implicitly, Pylyshyn 1984), occurs in what are called transducers (or, in the biological literature, "sensors"), whose job is to convert patterns of physical energy into states of the brain that constitute the encodings of the incoming information. [...] Given the view that the bridge from world to mind resides in transduction, the problem then becomes to account for how transduced properties become representations, or semantically evaluable states and, in particular, how they come to have the particular representational content that they have; how, for example, when confronted with a red fire engine, the transducers of the visual system generate a state that corresponds to the percept of a red fire engine and not a green bus.
[...]
At one time it was seriously contemplated that this was because we had a "red-fire-engine transducer" that caused the "red-fire-engine cell" to fire, which explained why that cell corresponded to the content red-fire-engine. This clearly will not work for many reasons, one of which is that once you have the capacity for detecting red, green, pink, etc., and fire-engines, buses, etc., you have the capacity to detect an unbounded number of things, including green fire-engines, pink buses, etc. In other words, if you are not careful you will find yourself having to posit an unlimited number of transducer types, because without some constraints transduction becomes productive. Yet even with serious constraints on transduction (such as proposed in Pylyshyn 1984, chap. 9) the problem of content remains. How do we know that the fire-engine transducer is not actually responding to wheels or trucks or engines or ladders, any of which would do the job for any finite set of fire engines? This problem is tied up with the productivity and systematicity of perception and representation. Failure to recognize this is responsible for many dead-end approaches to psychological theorizing (Fodor and Pylyshyn 1981; Fodor and Pylyshyn 1988).
As such, it becomes difficult to compare the "intelligence" of different systems that use different transducers. Kristinn Thórisson and colleagues have been working on a Task Theory for AGI, but I'm not aware of its current state.
I find the principles of NARS fascinating, both because of the multiple ways it deviates from standard logic and because of how universally it can be applied, as the work by Christian Hahm and colleagues on Visual Perception shows. There is also work on Speech Processing using NARS [2].
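To make the "deviates from standard logic" point concrete, here is a minimal sketch of two NAL truth functions: every statement carries a (frequency, confidence) pair, and rules such as deduction and revision propagate and pool evidence rather than preserve binary truth. The formulas follow Pei Wang's published NAL truth functions as I understand them; treat the details (e.g. the evidential horizon k = 1) as assumptions of this sketch rather than a definitive implementation.

```python
K = 1.0  # evidential horizon (assumed k = 1, the common default)

def deduction(f1, c1, f2, c2):
    """<M --> P> %f1;c1% and <S --> M> %f2;c2%  |-  <S --> P> %f;c%."""
    f = f1 * f2
    c = f1 * f2 * c1 * c2  # conclusions are never more confident than premises
    return f, c

def revision(f1, c1, f2, c2):
    """Merge two judgments about the same statement by pooling their evidence."""
    w1 = K * c1 / (1 - c1)          # total evidence behind each judgment
    w2 = K * c2 / (1 - c2)
    w_pos = f1 * w1 + f2 * w2       # pooled positive evidence
    w = w1 + w2                     # pooled total evidence
    return w_pos / w, w / (w + K)

print(deduction(0.9, 0.9, 0.8, 0.9))  # weaker, less confident conclusion
print(revision(0.9, 0.5, 0.6, 0.5))   # more evidence -> higher confidence
```

Note how revision makes the system non-monotonic in a controlled way: new evidence never simply overwrites a belief, it is weighed against what was already known.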
On the other hand, I find myself taking the position that just because we can learn everything within a single framework doesn't mean that we should, for reasons of compute efficiency. In particular, if the goal is to develop human-like intelligent systems for the known environments we grow up and live in, then I'm inclined to look at development (ontogeny) and evolution (phylogeny) to see what humans are endowed with. There has been plenty of work on this in recent decades [3,4].
I just had a look at the Table of Contents of [3], and unfortunately it seems that even this line of work is missing an Emotion/Drive Theory. I find it more plausible to explain behavior as directed towards the fulfilment of certain drives - which is what any self-maintaining system would need to do - rather than towards particular goals and criteria set by the designer. Again, these could be learned from scratch, but evolution has already worked them out for us over millions of years, across trillions of individuals in each generation. Moreover, an intelligent system would also need an understanding of emotions, because it should be able to learn from other humans, who have emotions.
[3] also seems to assume that we are philosophical zombies [5]; but even setting aside phenomenal consciousness and focusing on access consciousness, it appears to have nothing significant to say about the role of consciousness in learning. On consciousness, I'm particularly attracted to Global Workspace Theory [6], though I am in no position to evaluate it critically; there have been cognitive architectures based on GWT and its neural equivalents.
This has become a rant. To tie it back to NARS, one of the things I'd like to try some day is to come up with a high-dimensional / neural-network equivalent of NARS, or, failing that, to figure out the appropriate interface between modern neural networks and NARS. NARS would handle cognition, while neural networks such as Segment-Anything or Template-based Object Detection [7,8] would handle perception; though maybe even a neat integration itself requires a neural equivalent of NARS. I am not aware of anyone already working on a neural equivalent of NARS.
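As a toy illustration of what such an interface might look like, here is a sketch that translates detections from a neural perception module into Narsese judgments for a NARS reasoner (e.g. OpenNARS or ONA). The Detection structure, the label set, and the score-to-truth-value mapping are all assumptions of this sketch, not part of any existing NARS interface; only the basic Narsese syntax (`<{instance} --> predicate>. %f;c%`) is taken from the OpenNARS conventions.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """Hypothetical output of a perception module (e.g. SAM region + classifier)."""
    instance_id: str   # e.g. "obj1"
    label: str         # e.g. "fire_engine"
    attribute: str     # e.g. "red"
    score: float       # detector confidence in [0, 1]

def to_narsese(d: Detection) -> list[str]:
    # Map the detector score to NARS frequency; keep confidence moderate,
    # since a single observation is limited evidence (an assumed policy).
    f = round(d.score, 2)
    c = 0.9
    return [
        f"<{{{d.instance_id}}} --> {d.label}>. %{f};{c}%",
        f"<{{{d.instance_id}}} --> [{d.attribute}]>. %{f};{c}%",
    ]

for judgment in to_narsese(Detection("obj1", "fire_engine", "red", 0.93)):
    print(judgment)
# <{obj1} --> fire_engine>. %0.93;0.9%
# <{obj1} --> [red]>. %0.93;0.9%
```

The open question, of course, is whether such a thin symbolic bridge is enough, or whether the integration itself needs to happen in a shared high-dimensional representation.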
References: