Can someone show me some of the outputs from running NARS AI? Also some questions.


Brian Winfield

Oct 30, 2021, 9:40:20 PM
to open-nars
Hi! My name is Brian. I'm very interested in AGI.


Can someone here show me examples of what NARS can do? What I mean is: someone has written the code for NARS, run it, and been given results that made the author happy. I.e., actual results from the running, working code, not theory but an actual product, to be most clear.

I did see some examples on another page; however, they did not look natural or intuitive, nor did they look like someone talking to an AI or asking an AI anything; they looked like pseudocode or theory. I wasn't sure how those could be an indication of it working if they were the outputs, because I'm not sure what the AI was answering or solving.

Throw at me all different types of results from NARS, most impressive first :p.



Also, I am interested in hearing the basic idea of how NARS works. I know that with DALL-E they are trying to train it to find patterns in a large, diverse set of data from the text/image domains, which allows it to predict "what to do next" (within the restrictions of DALL-E, lol) really well when given a never-before-seen image and/or text. Still, some things are missing in it: it predicts what to do, but really it seems to stay tuned to the prompt it is given, hence it seems like it doesn't truly decide what to do.

When I say "the basic idea of how NARS works", I mean like how it takes the input or what input it starts with, and what it does to it or gathers, and how it shapes that stuff into the resulting output...

Tony Lofthouse

Oct 31, 2021, 6:13:09 AM
to Open-nars, brianhw...@gmail.com
Brian, welcome to the group,

There is a large body of research related to NARS published over several decades. Determining the most impressive application is a matter of opinion, so I will simply provide a link to some of the published papers related to applying NARS.

The following link provides a selection of many of the published papers (separated by subject). See the bottom of the list for application related papers.


Your questions about how NARS works have been answered from many perspectives in the published papers and related books. If you are interested, you will need to do some of the work yourself. The best, and probably only, way to understand NARS is to start with the theory and underlying principles. It can be difficult to know where to start, so I would suggest the following paper: NARS Introduction (temple.edu)

Once you have a basic understanding then feel free to ask further questions.

Regards
Tony



--
Tony Lofthouse
Founder, Reasoning Systems Ltd

Brian Winfield

Oct 31, 2021, 5:52:53 PM
to open-nars
Ok, I looked through 3 papers from each link, probably the good ones. I also found the User Guide, among other things. Is the link below to the User Guide saying that, to use NARS, you need to type in the formal language and cannot talk to the AI in natural language? And that it also replies to you in that language? How do you see NARS speaking and listening to natural language (text) in the future if it currently doesn't?


Also, if its memories store, process, and output only formal logic rules and not natural sentences like the ones we're writing, then how would you train this AI on a large amount of data without specifying every word's type and usage, e.g. "apple" and "throw"? Like, you'd need to hand-write each one... instead of learning from many contexts?

Tony Lofthouse

Nov 1, 2021, 5:29:38 AM
to Open-nars, brianhw...@gmail.com
Hi Brian,

It seems that you are particularly interested in Natural Language Understanding (NLU). There are three broad ways that NARS can gain knowledge: 1. by entering Narsese directly, 2. by utilising external KBs such as WordNet or ConceptNet (plugins are available), or 3. via perception systems.
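
For illustration, option 1 (entering Narsese directly) could look like the following. This is a hand-written sketch in ONA-style Narsese; the terms are mine and not from any shipped example:

// two beliefs entered by the user
<robin --> bird>.
<bird --> animal>.
// a question mark asks the system; deduction yields <robin --> animal>.
<robin --> animal>?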

If it has not become clear already, NARS is not a Deep Learning (DL) type system and does not require large amounts of data. I find it helpful to think of NARS in the same way we think of young children: NARS essentially learns in a similar way, and throwing large amounts of data at a young child would not be productive. In order for NARS to learn, say, NLU, it requires an education process. NARS starts out 'empty'. That said, NARS can build a large concept network from a relatively small amount of experience due to its generative nature.

If you want to look at some further examples of NLU with NARS, take a look in this repository: Home · opennars/OpenNARS-for-Applications Wiki (github.com)

This link is one implementation of the NARS principles.

If you are currently hoping to interact with NARS in natural language (as you typed above) then you will be disappointed. It is not GPT-3, but rather a cognitive system that reasons and adapts in real time.

Regards
T

Brian Winfield

Nov 2, 2021, 12:52:34 AM
to open-nars
Ok. But how would NARS be converted to allow natural language input/output if currently you have to type in and read Narsese? Narsese is perhaps useful, but it isn't English; how would NARS read/speak raw English? Or handle vision? I couldn't imagine talking to an AGI or listening to it if it spoke/read in Narsese, or how it could handle vision using Narsese?

robe...@googlemail.com

Nov 3, 2021, 10:30:23 PM
to open...@googlegroups.com
Hello,

you've picked something that is harsh for beginners, or beginners in AGI.

I would suggest taking a look at ONA's results; you can get a first impression from the Readme.md page.
This is "just one" implementation of NARS theory; there are others too.

Usability of NAR(S) will improve with time (because I am actively working on it now, like on many other things).

ONA already has a natural language processing (NLP) interface, but it is still beginner-unfriendly.

I guess you've made your first contact with Narsese, the "native" language of any NARS in existence.
You can drown yourself in the theory if you like; there are enough people who will help you with that, like Patrick, Tony and Dr. Pei Wang (in that order).

You mentioned DALL-E, which has no AGI ambitions at all. NARS isn't a typical ML system, or an ML system at all.

Good luck with whatever you do.


robe...@googlemail.com

Nov 3, 2021, 10:46:33 PM
to open...@googlegroups.com
There seems to be some confusion here.

One can see Narsese as the native language of any NARS, just like assembly instructions are the native language of any CPU. High-level programming languages and everything else need to be broken down into assembly instructions.
It's the same for any NARS: everything has to be converted to Narsese.

Vision has been, and will be, broken down into Narsese by specialized vision systems. ONA has a vision system based on contemporary DL neural networks.

Natural language input is also broken down into Narsese; see ONA.
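
To give a flavour, a frontend might make conversions roughly like these (a hand-written sketch; the actual encoding used by ONA's english_to_narsese may differ):

// "a cat is an animal" becomes an inheritance statement:
<cat --> animal>.
// "cats can jump" becomes a relation:
<(cat * jump) --> can>.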

English and vision are "a bit" more complicated than Narsese; they are basically the most advanced and most complicated forms of input you could throw at any proto-AGI or AGI system.

> I couldn't imagine talking to an AGI or listening to it if it spoke/read in Narsese, or how it could handle vision using Narsese?
At some point no one will be forced to communicate with NARS in Narsese; it just hasn't happened yet, and won't happen soon.

----

There are many reasons why NARS is so user- and beginner-unfriendly; enumerating them isn't really productive.
Things will change with time.

Brian Winfield

Nov 4, 2021, 12:34:48 AM
to open-nars

This is interesting. Did the AI or a human write the above Narsese? (I'm interested in the ones the AI wrote.)

And what, basically, is that Narsese saying? For example, to use a lighter to melt the toothbrush onto the screw to unscrew it? And all it knew was the goal of unscrewing the screw using a toothbrush?

How much data did it need to know to be able to do that, if so? 1 MB? 1 GB? If very little (how much?), is that one of the points NARS has over GPT? That it can do more with less data?

What essentially is it doing to come up with that plan? I mean, say it has a toothbrush and a screw it wants unscrewed, so it goes on to think: it wants to unscrew the screw, but its prediction here can go a few ways with some weights, and it would need to be very confident if it has few ideas to consider. How exactly does it come to think of melting the toothbrush into a flat head, or of melting it onto the screw so it can turn the screw? Typically I could see this being done in a larger ML model if it knew similar situations. If it has little wisdom, it would maybe need to look for exactly what it was given, and determine methods and what they could lead to, if that makes sense?

robe...@googlemail.com

Nov 4, 2021, 2:44:04 AM
to open...@googlegroups.com
>Did the AI or a human write the above Narsese?
Human; I call it "hand-crafting" (to make the distinction between human origin and machine origin, which is important to me).
AI has already written Narsese, for example the vision and NLP systems, or basically anything interfacing with NARS.

>...
Yes, it knew just that, nothing more; this example is "stand-alone".

>That it can do more with less data?
NARS isn't a system which needs tons of data to learn a specific high-level task. It can already do many, many high-level tasks, without any need to retrain it for each specific task or combination of tasks.

>...
The system is "just" combining the knowledge from the example to form other knowledge to fulfill its goals.

Explaining how a NARS system works under the hood is a lengthy process and is unsuited to this mode of communication.

I don't yet have the time to lead you through the process of how it came up with this solution. Maybe later.


Brian Winfield

Nov 4, 2021, 9:22:58 AM
to open-nars
Can you show me an example it did write, then, if the toothbrush one wasn't one? And a short explanation would be awesome if you can.

Tony Lofthouse

Nov 5, 2021, 6:12:26 AM
to open...@googlegroups.com
I would recommend that you take a look at some of the many NARS tutorial videos available on YouTube.

Search using ‘NARS AGI workshop’

Have fun

Tony

stephen clark

Nov 5, 2021, 1:22:19 PM
to open...@googlegroups.com

Brian,

If you compile the project ( https://github.com/opennars/OpenNARS-for-Applications ), you can run this command from a terminal to give you an interactive session in English.

python3 english_to_narsese.py |  ./NAR.exe shell | python3 narsese_to_english.py


Stephen

Brian Winfield

Nov 5, 2021, 9:58:54 PM
to open-nars
Wait, if NARS has to use a formal logic, then doesn't that mean *every* word it uses in its vocab has to be given rules? Like 'throw' is a verb, can be used between 2 objects, and cannot have properties, etc.? Then how could you train NARS on vision without hand crafting every visual feature?? It has to be formal logic-ized...

I will try to get around to running that, hopefully, thanks. I had tried 1 video and not [yet] found a code-running session; each video is also often 1-4 hours long... I have 2 more videos queued up, maybe one will do if I'm lucky.

So far I "get" and well understand NARS is supposed to solve a wide range of diverse problems, it is supposed to (given a context) predict/react to that stimuli the next "word", without wasting a ton of resources/ time to do so. But it wants to do a more efficient job than GPT and perhaps build its model a lot more "ideal" unlike perhaps GPT's which may be messy or confused. What else have you built onto NARS that makes it do more than just be a GPT-possible-replacement? I.e. does it change its own prompt and learn goals? How? Because GPT just dreams on and doesn't really search or do much itself. What is your plans to make it into AGI really then?

robe...@googlemail.com

Nov 6, 2021, 9:47:40 AM
to open...@googlegroups.com
> Wait, if NARS has to use a formal logic, then doesn't that mean *every* word it uses in its vocab has to be given rules?
Rules have to be given, but the rules can be pretty generic: informally, "X can Y" will get translated to the "can" relation <(X * Y) --> can>.
Then, from "frog can jump" and "frog can swim", it can derive the conclusion "swim is similar to jump".
We can't yet show this reasoning with the natural-sounding input and output a user would see, as stated here, because it's not THAT usable yet. But you could already use ONA now to at least let it conclude <swim <-> jump>, which a natural language interface should automatically present to the user as "swim is similar to jump".
To go back to the question: yes, it has to "boot" with some input/output program and some knowledge about grammar etc., or it has to be able to learn these on the fly (supervised by a human). But it can learn the meaning of words if it has enough knowledge and connections between the words etc.
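
Written out, the frog example above could be fed to ONA roughly like this (hand-written; the derivation is my reading and may involve intermediate steps):

// "frog can jump" and "frog can swim" as "can" relations
<(frog * jump) --> can>.
<(frog * swim) --> can>.
// sharing the predicate lets the comparison rule eventually derive:
// <swim <-> jump>.  ("swim is similar to jump")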

NLP/NLU is a HUGE research sub-field, which is underexplored for NARS.
I can just tell you that NARS will someday exploit statistical correlations from tons of data just like GPT, with the difference that it will be able to reason about relationships and all sorts of stuff, way better than any GPT system, because it can do "real" reasoning about things which do not exist in the internet text that is used for ML models. Why am I saying this? Because it's the logical endpoint of capable systems which are doing NLP. It's also not the starting point; that would be ridiculous.

> Then how could you train NARS on vision without hand crafting every visual feature?? It has to be formal logic-ized...
As I said, ONA is using a deep learning system:
>> Vision has been, and will be, broken down into Narsese by specialized vision systems. ONA has a vision system based on contemporary DL neural networks.
A program takes the best classification from the deep learning vision system and converts it to Narsese to feed into the NARS, in this case into ONA.
E.g. the DL system returns the class "frog", and NARS receives as input <frog --> currentObject>. :|: (think of it as "currently, the current object is a frog").
In this case, when using a DL system, not every feature has to be handcrafted; the features are learned with supervised learning. "Only" the deep learning algorithms had to be handcrafted, which was already done by an army of programmers, engineers and scientists.
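
To make the conversion concrete, here is a minimal hypothetical sketch of such a bridge in Python (function names and the threshold are mine; ONA's actual interface code differs):

# Hypothetical sketch of a classifier-to-Narsese bridge (not ONA's actual code).
# classify() stands in for a real DL vision model returning (label, confidence).
def classify(image):
    return "frog", 0.9  # canned result; a real model would inspect the image

def to_narsese_event(label):
    # encode the best classification as a Narsese event, as described above
    return "<" + label + " --> currentObject>. :|:"

label, confidence = classify(None)
if confidence > 0.5:  # only pass on confident detections (threshold is my choice)
    print(to_narsese_event(label))  # -> <frog --> currentObject>. :|:
    # in practice this output would be piped into the NAR shell as input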

> What else have you built onto NARS that makes it do more than just be a GPT-possible-replacement?

> I.e. does it change its own prompt and learn goals? How?

It derives new goals from either external goals or derived goals, yes.
How? Well, it has an array of goals; it takes the best goal, combines it with learned temporal relationships to form another goal, and so on, till the program is terminated.
It can also execute an operation (action) this way, when the goal IS the operation (action).
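
As a toy illustration of that loop (my own Python sketch with made-up knowledge; real implementations like ONA also track truth values, attention and evidence):

# Toy sketch of the goal-derivation loop described above (not real ONA code).
# beliefs maps a desired outcome to a learned precondition that tends to lead to it.
beliefs = {"eat": "cook", "cook": "shop"}  # hypothetical temporal knowledge
operations = {"shop"}                      # goals that are directly executable

goals = [("eat", 0.9)]                     # (goal, priority): one external goal
while goals:
    goals.sort(key=lambda g: g[1], reverse=True)
    goal, priority = goals.pop(0)          # take the best goal
    if goal in operations:
        print("^" + goal)                  # the goal IS the operation: execute it
        break
    if goal in beliefs:                    # combine with a temporal relationship...
        goals.append((beliefs[goal], priority * 0.9))  # ...to derive another goal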

> What are your plans to make it into AGI really then?
There is no universal agreement (as there can't be for many more years to come) on what to do, and when, to reach more capable systems. But some developments roughly follow the roadmap outlined by Pei:

We touched on:
1) Sensors and effectors: let's say 3%?
2) Natural languages: let's say 0.5%
3) Education: let's say 0.01%
4) Socialization: let's say 0.001%, because I didn't try it, but I know someone who may experiment with it
5) Hardware: let's say 0.0001%, because I don't know anyone who did this, but I can't be sure that no one has ever realized NARS-specific hardware on this planet up to now, the end of 2021
6) Evolution: let's say 0.0001%, because I tried once to search for parameters of a NARS, and because there is a lot to do here
These numbers are "made up", because no one knows what every actor did and how "big" the tasks are, etc., but I guess they are in the right ballpark.

But the plan doesn't involve "just throw more compute and RAM at it" like that of some actors, for example OpenAI, because AGI is a way more complicated problem than one could brute-force with today's compute of all computers combined (and this will hold true for more than 50 years; there are papers on that).

Brian Winfield

Nov 6, 2021, 4:14:33 PM
to open-nars
Ok interesting. So a lot of this seems to almost align with my understanding of AGI. We can maybe talk more later.

One last question: how will AGI/NARS/GPT imagine (predict) a scene like "I will raise my arm up with a fist, poke out my pinky finger, and stick the pinky on my tongue", but decide to not actually do it using its real-life robot limbs? And how does it decide TO do it? I have a hypothesis: maybe I predict "do it" or "don't do it" along with my predicted mental images that associate to my motor limbs. But then again, I can imagine 'do it, come on, move!!' and still not put that pinky on my tongue, and then decide later to go do it. Any idea how? It must all go back to sensory prediction: if you predict the scene and predict to do it, it should just command the motors pictured. Then the cerebellum should make sure the limbs reach those imagined targets.

Patrick Hammer

Nov 7, 2021, 3:21:07 AM
to open-nars
Hi Brian!

"Wait, if NARS has to use a formal logic, then doesn't that mean *every* word it uses in its vocab has to be given rules?"

While "rules" (statements in NARS) can be given, they can also be learned by the system. If for instance another agent is observed to use the word "pizza" when the waiter arrives, and gets a pizza consequently,
then the system will do the same when the waiter arrives and having a pizza is its goal. It learns such contingencies rapidly and effectively, adaptation is a key difference to "traditional" symbolic AI systems.
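
One way such a learned contingency might be written down (my own illustrative encoding, not output from an actual run):

// "when the waiter arrives, saying 'pizza' tends to lead to getting a pizza"
<(waiterArrives &/ ^say_pizza) =/> pizzaReceived>.
// given the goal pizzaReceived! and the observed event waiterArrives. :|:
// the decision maker can choose to execute ^say_pizza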

"Then how could you train NARS on vision without hand crafting every visual feature?? It has to be formal logic-ized..."

Logic-ized essentially just means "encoded". It can be done in an automatic way, e.g. by encoding the different relevant aspects detected in the input image, for instance with YOLOv4, which is trained to give bounding box locations and object labels on the output side. Among the simplest possible encodings of this is <ObjectLabel --> [xLocationTerm yLocationTerm]>. :|:
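
For instance, a converter for such detections might look like this (a hypothetical Python sketch; the term names and binning are made up, not from the actual pipeline):

# Hypothetical sketch of the encoding described above (not actual project code).
# Assumes a detector yields (label, x, y) with box-center coordinates in [0, 1].
X_NAMES = ["left", "centerX", "right"]
Y_NAMES = ["top", "centerY", "bottom"]

def location_term(value, names):
    # discretize a coordinate into a coarse, symbolic location term
    return names[min(int(value * len(names)), len(names) - 1)]

def encode_detection(label, x, y):
    # <ObjectLabel --> [xLocationTerm yLocationTerm]>. :|:
    return "<%s --> [%s %s]>. :|:" % (
        label, location_term(x, X_NAMES), location_term(y, Y_NAMES))

print(encode_detection("cat", 0.12, 0.85))  # -> <cat --> [left bottom]>. :|: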

Best regards,
Patrick

Brian Winfield

Nov 7, 2021, 1:17:46 PM
to open-nars
And my last question?

Tony Lofthouse

Nov 9, 2021, 5:44:08 AM
to open...@googlegroups.com
Hi Brian, this type of behavior falls under the mandate of what psychology refers to as Executive Function (EF). Specifically, response control is one of the factors that determines whether you act or don't act. This is an area of active research for us, and some of the group are using Relational Frame Theory (RFT) as a basis to develop executive function capabilities. There is a good deal of research available online related to RFT and EF, if you want to explore the ideas further.

Regards
Tony

Brian Winfield

Nov 9, 2021, 11:11:22 PM
to open-nars
I just wanted your answer on how NARS does it or would do it. My last reply above pretty much explains where in my question I get confused. How do you predict some imaginary text/image/video and /not/ do the actions, and then imagine it but /do/ do it? I know I finally "decide" to "do it"; I can feel when I will do it and won't do it when thinking of some movie plan, but I don't understand where that predicted sense comes from. I mean, I have some sensory context and then predict the "do it" sensory context. Please explain in simple words how this works.

Gene

Nov 10, 2021, 1:54:28 PM
to open-nars
I apologize if I've misunderstood Brian's question, misunderstood NARS, or both.

The question was about acting & not-acting on "I will raise my arm up with a fist, poke out my pinky finger, and stick the pinky on my tongue".

Multipart answer.

First part: can that English statement be represented in NARS?
I think so. Something like this:
// At some point, NARS has this concept in its memory...
=/> (<arm --> raised> && <hand --> fist>) =/> <pinky --> poked> =/> <(pinky * tongue) --> touching>
  • The initial "=/>" up there has no argument on the left side, indicating that it's predicting a state in the future (as opposed to a state after another specific state).
  • The next clause (which is in parens) specifies states for arm & hand; the "&&" between them does not specify the order in which they became true.
  • The "=/>" after that clause means that the left side (states for arm & hand) are preconditions for what follows.
  • What follows is the state we want for the pinky.
  • And after that, we want pinky along with tongue to be "touching".
That clause up there could represent what a system considers doing.  It could also represent what it expects someone else will do.

To make the state a reality, it must execute operations. If it doesn't execute operations, the state of the world doesn't change. So deciding NOT to do it is trivial: NARS simply decides not to call the operations (or, more likely, it tries doing something else and doesn't get around to calling those operations).

If NARS decides to make it a reality, it'll need to execute the operations. What those are depends on the operations supplied to it by whoever assembled the software & hardware. Maybe there are detailed operations such as ^adjust_left_elbow, ^adjust_first_pinky_joint, ^adjust_second_pinky_joint, and more. (Lots of ops, in that case.) Or maybe there are bigger-picture operations such as ^move_hand_to (where subsystems would figure out the details). Since the answer depends on the actual hardware & the operations for controlling it, it's difficult to give a concrete answer here.

Along the way to figuring out what operations to call, the NARS would probably need concepts in its memory so that it could figure out which operations to call with which parameters.
Again, it's difficult to be concrete given what we know of the hypothetical situation, but maybe it has concepts like this:

// To make a fist, must have an empty hand
<<hand --> empty> ==> <hand --> fist>>
// When hand is empty, calling ^curl_fingers gets us a fist
<(<hand --> empty> && ^curl_fingers) ==> <hand --> fist>>
// To get an empty hand, open fingers
<^open_fingers ==> <hand --> empty>>

There are probably a ton more concepts in its memory, but you get the idea. There might be a group of concepts that could be used to figure out which specific, individual joint positions must be adjusted to get the pinky into a "poked" state. And more.

As long as, by the rules of NARS, pursuing those inferences (? reductions? -- not sure of the right word) was the best thing to do next, it'd keep following them, calling the operations, which would adjust body position.

(As I said at top, sorry if I have misunderstood the question.  Also, I'm new to NARS so be skeptical of this description unless someone else backs it up or corrects it.)

Gene

Nov 10, 2021, 2:21:51 PM
to open-nars
I suppose a general answer to questions of {can, how would} NARS do _this_ would be:

1. First, can the idea be represented in Narsese? Write it down in Narsese. (If you can't, the answer is "No, NARS can't do it".)

2. Second, as a human with pencil & paper, apply the rules of NARS to see if you can derive that statement from a bunch of concepts expressed in Narsese that you believe the system could be expected to have in memory already.

(The rules of NARS are thoroughly documented in "Non-axiomatic logic: A model of intelligent reasoning" by Pei Wang.)

Brian Winfield

Nov 10, 2021, 2:42:00 PM
to open-nars
That seems to skip right over my question... Let me try again:

My 1st question/theory was: if I predict/imagine a scene/movie like DALL-E does, that already visually describes all of the expected motor actions of my plan naturally; all that's needed now is for the limbs to act (each limb in the image is activated, with its rotation and speed) and for the cerebellum to correct numerous small errors so the input matches the desired target image stored in the brain. No motor cortex is needed; sensory hierarchies store the same things. Only leaf 'limb nodes' are needed.

And so my 2nd question was: how can I think of the movie, and decide to do it or not do it? For example, DALL-E may spit out a prediction "<a movie of raise hand> + DO IT!!", and so clearly it has a plan in mind and is also going to do it in real life. The problem with this theory, though, is that I can think of the scene and predict the do_it and still withhold myself from acting it out in real life. I know it needs to think of a movie plan and I know it needs to predict the 'do it' memory; it can't just predict the movie plan and expect RL to handle deciding to do it or to withhold itself. It must predict using sensory data, because it is all context-based and requires simply a prediction to decide to do it in real life, and RL should be used to control sensory prediction, like in Facebook's Blender chatbot (which is cooler than GPT because it uses word desires/goals, a forcing called Persona). I'm thinking now that maybe my goal saying not_do_it is strong enough, hence when I predict to do_it and I don't do it, I am actually not hearing it, but in the background that weight is still stronger. To understand what I mean, see Facebook's Blender chatbot: it uses such a mechanism, called Persona, forcing certain words in the background no matter what other words it has heard/said. So: it is against me no matter if I scream in my brain 'do it' constantly and in different ways, e.g. 'act it!', 'move!', 'initiate plans!'.

Patrick Hammer

Nov 11, 2021, 2:01:58 AM
to open...@googlegroups.com
Hi Brian!

@Gene: I agree, going through it step by step is a good idea.
It's also good from a development perspective: when certain example-relevant inferences aren't made, it reveals information about the implementation. It can indicate whether it was due to insufficient knowledge and resources (knowledge the system did not yet have or learn, the way attention is allocated, or information which was forgotten), potentially missing inference rules, or potential software issues.

"My 1st question/theory was: If I predict/imagine a scene/movie like DALL-E, that already describes visually all of the expected motor actions naturally (of my plan), all that's needed now is the limbs to act (activated is each limb in the image (its rotation and speed)) and the Cerebellum to correct numerous small errors so the input matches the desired target image in brain that is stored. No motor cortex is needed, sensory hierarchies store the same things. Only leaf 'limb nodes' are needed."

I don't see a question in there. But as a comment, I would say motor control is a combination of learning sensorimotor contingencies (as NARS is designed to do) and partly innate control circuits distributed across the spine and muscles. A deer, for example, can stand within 10 minutes after birth; it doesn't have to learn that, while for humans there is clearly also a learning aspect to it.

In any case, from a NARS perspective I suggest using appropriate motor procedures and control circuits whenever sufficient, which it can invoke via operations, learn to use in different contexts, and combine in various ways as building blocks. Also, operations can take arguments, which can act as parameters for the motor procedures. It can also generalize them; an example of this is example4.nal in the examples folder of the ONA repository.

"And but so my 2nd question was how can I think of the movie, and decide to do it or not do it. For example DALL-E may spit out a prediction "<a movie of raise hand> + DO IT !!", and so clearly it has a plan in mind and also is going to do it in real life. The problem with this theory though is I can think of the scene and predict the do_it and still withhold myself from acting it out in real life. I know it needs to think of a movie plan and I know it needs to predict the 'do it' memory, it can't just predict the movie plan and expect RL to handle deciding to do it or withhold itself, it must predict using sensory, because it is all context based and requires simply a prediction to decide to do it in real life, and using RL should be to control sensory prediction like Facebook's Blender chatbot (which is cooler than GPT because it uses word desires/goals, a forcing called Persona). I'm thinking now maybe my goal is strong enough that says to not_do_it, hence when I predict to do_it and I don't do it, I am actually not hearing it but in the background the weight is stronger still. To understand what I mean, see Facebook's Blender chatbot. It uses such, called Persona, forcing certain words in the background no matter if heard/said other words. So: it is against me no matter if I scream in my brain 'do it' constantly and in different ways ex. 'act it!', 'move!', 'initiate plans!'."

You seem to confuse prediction and decision.
Predicting an outcome is not the same as wanting it to happen.
A predicted outcome does not lead to a decision, unless it's a desired outcome / a goal event.
We have a complete story about how prediction and decision need to interact. Our paper "Goal-directed procedure learning" addresses this in detail, and these principles have been demonstrated to work in multiple implementations.
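
To make the distinction concrete in Narsese terms (my illustration, not taken from the paper):

// belief: executing ^op in context a tends to bring about g
<(a &/ ^op) =/> g>.
// a mere prediction that g will happen (a derived belief) triggers nothing;
// only a goal, marked with '!', can trigger a decision:
g! :|:
// now, if a has been observed, the system may decide to execute ^op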

Best regards,
Patrick






Brian Winfield

Nov 11, 2021, 2:17:07 PM
to open-nars
I don't think that paper will answer my question, even if I had the time to read it all... I don't think you understand my question.

When I think of doing something, I can be doing it or not; there is a switch, is all. I was asking about that.

Also, I only wanted a short answer; the question is a very simple one to answer, no need for any paper.

Brian Winfield

Nov 11, 2021, 2:56:24 PM
to open-nars
Well, from the comment you have given me, although very short, it seems you are saying what I'm saying: to enact it requires that it be its goal. That makes 2 of us then. Ok, that's all I wanted to know. Thank you for helping.