Recently I have been learning NAL-7 by reading the second edition of the NAL book:
Definition 11.2. In IL, there are two basic temporal relations between two events: “before” (which is irreflexive, antisymmetric, and transitive) and “when” (which is reflexive, symmetric, and transitive).
Definition 11.4. The real-time experience of a NARS is a sequence of Narsese sentences, separated by non-negative numbers indicating the interval between the arriving time of subsequent sentences, measured by the system’s internal clock.
Definition 11.5. The conjunction connector (‘∧’) has two temporal variants: “sequential conjunction” (‘,’) and “parallel conjunction” (‘;’). “(E1, E2)” represents the compound event consisting of E1 followed by E2, and “(E1; E2)” represents the compound event consisting of E1 accompanied by E2 in time.
(P166) Like an atomic event, a compound event happens in an unspecified period. Furthermore, the temporal relations between their components are “as accurate as experienced”. It means the system considers (E1; E2) true (to a degree in NAL, of course) at a moment when it considers both E1 and E2 true at that moment. Similarly, (E1, E2) is true at a moment when E2 is seen as following E1, which implies that the occurrence time of parallel conjunction like (E1; E2) is the same as its components, while the occurrence time of sequential conjunction like (E1, E2) is the same as its last component (the compound event may have more than two components).
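For reference, these book notations correspond to the ASCII Narsese used later in this thread roughly as follows (OpenNARS-style syntax; the // comments are annotations, not part of Narsese):
```narsese
(&/, E1, E2)  // "(E1, E2)": sequential conjunction, E1 followed by E2
(&|, E1, E2)  // "(E1; E2)": parallel conjunction, E1 accompanied by E2
<E1 =/> E2>   // predictive implication: E1 happens "before" E2
<E1 =|> E2>   // concurrent implication: E1 happens "when" E2 happens
<E1 =\> E2>   // retrospective implication: E1 happens after E2
E1. :|:       // tense marker: E1 is judged to occur now
E1. :\:       // tense marker: E1 occurred in the past
E1. :/:       // tense marker: E1 will occur in the future
```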
While working through this material, an idea occurred to me:
For the sake of discussion, we can refer to the unified type as a "temporal conjunction", represented as the Narsese compound term (&*, A, B, C, ...).
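As a minimal sketch, with &/ and &| being the existing OpenNARS connectors and &* the hypothetical unified connector:
```narsese
(&/, A, B, C)  // existing: sequential conjunction, A then B then C
(&|, A, B, C)  // existing: parallel conjunction, A, B, C at the same time
(&*, A, B, C)  // proposed: unified "temporal conjunction"
```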
Here are some more detailed questions about this "temporal conjunction" idea:
Are there any advantages or disadvantages to such an approach compared to the previous one?
Theoretically, can such Narsese representations be truly equivalent to, yet more concise than, the previous versions? If not, are there any examples of Narsese that are more difficult or impossible to represent?
From an engineering perspective, can such Narsese representations be implemented and applied more easily than the previous versions? Would they require more complex data structures or algorithms?
Everyone is welcome to share their understanding of the design of NAL-7!
While studying NAL-7, I noticed the distinction between "Temporal Implication" and "Temporal Conjunction".
Here, how does NARS handle concepts such as "[Future]", "[Ought]", and "[Happened]"?
For example:
```narsese
<A =/> G>.
A. :|:
10
```
In OpenNARS and ONA, "G. :|:" can be derived, whereas PyNARS (OpenNARS 4) derives "G. :\:". If this is the mechanism, how are anticipations handled?
In the latest design of the event buffer, I only see the parts that form implications. Apart from different evidential bases, how else does the system distinguish "conclusions derived internally by NARS" from "events that actually occur as external input"?
Is there always a deviation between "events inferred by NARS" and "events occurring in the external environment" (for example, concluding G. :|: without having received input G. :|:)?
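To make the question concrete, here is a hypothetical trace (modeled on the snippets in this thread; the COMMENT lines are my annotations, not the output of any actual implementation):
```nars-terminal
IN: <A =/> G>.
IN: A. :|:
INFO: running 10 cycles...
OUT: G. :|:
COMMENT: this G is a prediction derived from <A =/> G> and A
IN: G. :|:
COMMENT: this G is an observed event arriving as external input;
COMMENT: apart from evidential bases, what else tells them apart?
```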
Perhaps the mental operator ^wait could be implemented with a kind of delayed callback?
For example, the event <(*, {SELF}, 5) --> ^wait>. :|: could be generated as a delayed callback of the execution EXE: <(*, {SELF}, 5) --> ^wait>, like this:
```nars-terminal
EXE: <(*, {SELF}, 5) --> ^wait>
INFO: running 5 cycles...
OUT: <(*, {SELF}, 5) --> ^wait>. :|:
```
In this case, the "waiting" operation would act as a "delayed acknowledgment", and a more concrete scenario might look like this:
```nars-terminal
IN: <(&/, A, <(*, {SELF}) --> ^left>, <(*, {SELF}, 5) --> ^wait>, <(*, {SELF}) --> ^right>) =/> GoodNar>.
IN: GoodNar! :|:
IN: A. :|:
INFO: running 1 cycles...
EXE: <(*, {SELF}) --> ^left>
COMMENT: running 5 cycles...
OUT: <(*, {SELF}, 5) --> ^wait>. :|:
INFO: running 1 cycles...
EXE: <(*, {SELF}) --> ^right>
```
Is this approach more feasible than using "interval" terms?
My own thoughts are:
1. Pros: No need to introduce new "interval" terms
a) No need to consider numerical handling of intervals when processing "Sequential Conjunctions"
2. Cons:
a) It may still be necessary to introduce similar "numerical" terms, such as 5, as ordinary atomic terms to indicate how many inference cycles to wait
b) Since these are not specialized numerical terms, NARS might need some kind of hack to deal with "numerical terms as Words", which could affect the consistency of sentence processing in the system (see the comparison sketch below)
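For comparison, here is a sketch of the same procedure under both encodings; the +5 interval syntax follows OpenNARS conventions, while in the ^wait form the 5 is just an ordinary atomic term:
```narsese
// with an interval term (OpenNARS style):
<(&/, A, <(*, {SELF}) --> ^left>, +5, <(*, {SELF}) --> ^right>) =/> GoodNar>.
// with ^wait and an ordinary numeric word:
<(&/, A, <(*, {SELF}) --> ^left>, <(*, {SELF}, 5) --> ^wait>, <(*, {SELF}) --> ^right>) =/> GoodNar>.
```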
I hope these thoughts will help more people :D
What usually matters is conditioning on the right events, the ones that predict others, so that they can serve as anchor points, rather than relying on the passage of time as a conditional cue by itself.
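As a hypothetical illustration (the term names doorbell, door_opens, and guest_enters are invented, and the +5 interval follows OpenNARS conventions):
```narsese
// time as the cue: after the doorbell, wait ~5 cycles, then expect a guest
<(&/, doorbell, +5) =/> guest_enters>.
// an event as the anchor: the door opening predicts the guest entering
<(&/, doorbell, door_opens) =/> guest_enters>.
```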
Hi Patrick,
Thanks for your insight into ONA's temporal and procedural inference!
Recently, I have been studying the core of ONA and the mechanisms of NAL-7 and NAL-8, so I can share my understanding based on that work:
According to my technical research, ONA directly adds an occurrenceTimeOffset field to Event and Implication to represent "time deltas / intervals" (one serving NAL-7 and the other NAL-8). Given ONA's binary-tree-based Term design, this now seems the relatively better choice: although occurrenceTimeOffset is generally only used in Temporal Implications, tracking "time deltas" explicitly inside terms would require Sequences in ONA to support more elements (e.g., (A &/ ^Op) turning into ((A &/ +5) &/ ^Op)), which would make the corresponding terms more complex and reduce overall reasoning efficiency (extracting A would then require multiple rule applications).
Additionally, under the "numeric id → atomic term name" encoding that ONA uses for Narsese atoms, these "time deltas / intervals" might require one atom ID per number (+1 would be one atom and +5 another), or else an encoding other than the binary tree, such as Atom[] {'+', 1} / Atom[] {'+', 5}; either option could pose challenges for ONA's data-structure layout.
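To illustrate the extraction cost mentioned above, here is a sketch using the example terms from this post (ONA's actual decomposition rules may differ):
```narsese
((A &/ +5) &/ ^Op)  // left-nested binary sequence with an explicit interval
(A &/ +5)           // after one decomposition step
A                   // after a second decomposition step
```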
On a theoretical and logical level, I agree with your viewpoint.
Perhaps in ONA, occurrenceTimeOffset == 0 itself already achieves "Parallel/Simultaneous"?
Perhaps once it touches upon the level of "embodiment", NARS needs to adopt a more general, entirely new control mechanism compared to layers 1-6?
Once "Anticipations", "Making Decisions", "Implication Considerations" and "Goal Suggestions" are involved, can the pure "logical reasoning" of NAL 1-6 still suffice?
In this regard, the duality of ONA's "Semantic + Sensorimotor" mechanism (Declarative Inference for beliefs in NAL 1-6 + Temporal & Procedural Inference for goals in NAL 7-8) is deeply intriguing and logically appealing.
Since writing NARust-158, I have increasingly felt that a NARS with only the NAL 1-6 layers and no sensorimotor mechanism is incomplete: to be a complete AGI system, it cannot just perform abstract logical reasoning; the whole system must "make its moves" within an environment (whether virtual or real).
After studying ONA, I believe that even if this intuition does not hold up, it will at least have undergone systematic validation.
Best regards,
Tessergon
This approach makes it more practical from an engineering perspective. The fact remains that no current NARS implementation can learn to play Pong without a clear separation of inference pathways—and Pong doesn’t even require multistep decision-making. For context, OpenNARS v3.0.x was the first implementation to introduce an event bag and implication tables in each concept specifically for temporal and procedural processing. ONA builds on these early efforts, enabling reliable goal derivation as well. To my knowledge, no other NARS implementation reliably learns and executes procedural knowledge. If anyone thinks otherwise, we have multiple benchmarks in our evaluation suite for comparison.
About four years ago, Christian Hahm compared ONA with OpenNARS v3.0.x in Pong ( https://github.com/ccrock4t/NARS-Pong ), which I believe was the last attempt to compare procedural reasoning between NARS implementations. The new OpenNARS design, intended to replace v3.0.x, has yet to reach a comparable (or ideally better) level of functionality.