[NAL-7] Can sequential conjunction and parallel conjunction be semantically unified into one type of compound term?


Tessergon Ng

Sep 7, 2024, 7:21:56 AM
to open-nars

Recently I have been learning NAL-7 by reading the second edition of the NAL book:

Definition 11.2. In IL, there are two basic temporal relations between two events: “before” (which is irreflexive, antisymmetric, and transitive) and “when” (which is reflexive, symmetric, and transitive).

Definition 11.4. The real-time experience of a NARS is a sequence of Narsese sentences, separated by non-negative numbers indicating the interval between the arriving time of subsequent sentences, measured by the system’s internal clock.

Definition 11.5. The conjunction connector (‘∧’) has two temporal variants: “sequential conjunction” (‘,’) and “parallel conjunction” (‘;’). “(E1, E2)” represents the compound event consisting of E1 followed by E2, and “(E1; E2)” represents the compound event consisting of E1 accompanied by E2 in time.

(P166) Like an atomic event, a compound event happens in an unspecified period. Furthermore, the temporal relations between their components are “as accurate as experienced”. It means the system considers (E1; E2) true (to a degree in NAL, of course) at a moment when it considers both E1 and E2 true at that moment. Similarly, (E1, E2) is true at a moment when E2 is seen as following E1, which implies that the occurrence time of parallel conjunction like (E1; E2) is the same as its components, while the occurrence time of sequential conjunction like (E1, E2) is the same as its last component (the compound event may have more than two components).

And I had an idea the other day while understanding it:

  1. It seems that "interval" terms only appear in sequential conjunctions
  2. It seems that parallel conjunction can be represented as sequential conjunction without interval separation, that is, as a sequence of events whose occurrence-time difference is 0
  3. If we always bundle "events that happen at the same time" together, and then link these parallel conjunctions into a sequential conjunction, does that mean that sequential conjunction and parallel conjunction can be represented by a unified term type?

For the sake of discussion, we can refer to the unified type as "temporal conjunction", and represent it as the Narsese compound term (&*, A, B, C, ...).

Here are some more detailed ideas about "temporal conjunction":

  1. Core idea: If there is no "interval" separation between two events, they can be considered as concurrent/parallel/simultaneous
  2. Sequential conjunction can be unified into "temporal conjunction" containing "interval" terms, by reifying the implicit intervals such as "+1"
    • Example 1: (&/, A, +10, B, +3, C) => (&*, A, +10, B, +3, C)
    • Example 2: (&/, A, B, C) => (&/, A, +1, B, +1, C) => (&*, A, +1, B, +1, C)
  3. Parallel conjunction can be unified into "temporal conjunction" that does not contain intervals, because a "concurrent interval between events" such as (&|, A, +1, B) does not make sense in NAL-7
    • Example: (&|, A, B, C) => (&*, A, B, C)
  4. Combinations of sequential/parallel conjunctions can simply be folded into a single-layer term
    • Example 1: (&/, (&|, A, B), +1, C) => (&*, A, B, +1, C), which has same meaning as (&*, B, A, +1, C) but different from (&*, A, C, +1, B).
    • Example 2: (&|, (&/, A, +1, B), C) => (&*, A, +1, B, C), which folds multiple sequential/parallel conjunctions into a single-layer term.
    • Note: This means that the ordering of components in a "temporal conjunction" can be a bit complicated: terms that are not separated by intervals can be swapped without changing the overall meaning, while exchanging terms across intervals changes the overall meaning
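To make the folding rules above concrete, here is a small Python sketch (purely illustrative, not taken from any NARS implementation) that flattens nested `&/` / `&|` compounds into a single `(&*, ...)` term. Sorting the atoms inside each interval-free group is my own arbitrary choice of canonical form, so that the order-insensitivity of parallel components becomes syntactic:

```python
# Atoms are strings ("A"); intervals are tuples ("+", n); compounds are
# tuples whose first element is the connector: ("&/", ...) or ("&|", ...).

def flatten(term):
    """Recursively collect the leaves of nested &/ and &| compounds."""
    if isinstance(term, str) or term[0] == "+":
        return [term]              # atom or interval: keep as-is
    leaves = []
    for component in term[1:]:     # skip the connector
        leaves.extend(flatten(component))
    return leaves

def temporal_conjunction(term):
    """Fold a nested conjunction into one canonical (&*, ...) term."""
    result, group = ["&*"], []
    for e in flatten(term):
        if isinstance(e, str):
            group.append(e)
        else:                      # an interval closes the current group
            result.extend(sorted(group))
            result.append(e)
            group = []
    result.extend(sorted(group))
    return tuple(result)
```

With this, Example 1 above folds as (&/, (&|, A, B), +1, C) => (&*, A, B, +1, C), and the variant with B and A swapped folds to the same term, while moving a term across the interval yields a different term.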

Are there any advantages or disadvantages to such an approach compared to the previous one?
Theoretically, can such Narsese representations be truly equivalent to, and more concise than, the previous versions? If not, are there any examples of Narsese that are more difficult or impossible to represent?
From an engineering perspective, can such Narsese representations be implemented and applied more easily than the previous versions? Will they result in more complex data structures or algorithms?

Welcome to discuss your understanding of the design of NAL-7!

Pei Wang

Sep 8, 2024, 8:02:36 PM
to open...@googlegroups.com
Hi Tessergon,

My current idea is to remove "interval" from Narsese, as it is based on subjective measurement and does not make much sense in communication.

The interval between A and B in A =/> B will be stored in the compound term for implication, with 1 as the default value. This interval measurement will still be used, but mostly in the "event buffer" of the new architecture.

There won't be any interval explicitly added in (A, B, C), and "1" is assumed. If there is any special need, timing events (such as "wait for 5 cycles") can be inserted. "Wait" may be introduced as a mental operator.

You are correct that "parallel conjunction can be represented as sequential conjunction without interval separation", but since interval won't be used in conjunctions, introducing a "temporal conjunction" seems unnecessary, though I don't mind giving it a try.

Regards,

Pei



--
You received this message because you are subscribed to the Google Groups "open-nars" group.
To unsubscribe from this group and stop receiving emails from it, send an email to open-nars+...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/open-nars/14bc1c4f-c1a7-4d5d-b997-a5b3ca42d6b2n%40googlegroups.com.

Tessergon Ng

Sep 14, 2024, 5:42:14 AM
to open-nars

When studying NAL-7, I noticed the distinction between "Temporal Implication" and "Temporal Conjunction":

  • When the system receives "Temporal Implication" as a belief, the terms contain events marked with "[Future]" or "[Unhappened]" (as Predicates),
  • On the other hand, when the system receives "Temporal Conjunction" as a belief, all events in the conjunction have already occurred.

Here, how does NARS handle concepts such as "[Future]", "[Ought]", and "[Happened]"?

For example:

```narsese
<A =/> G>.
A. :|:
10
```

In OpenNARS and ONA, "G. :|:" can be derived, whereas PyNARS (OpenNARS 4) derives "G. :\:". If this is the mechanism, then how does it handle anticipations?

In the latest design of the event buffer, I only see parts that form implications; apart from different evidential bases, how else does the system distinguish between "conclusions derived internally by NARS" and "events that actually occur in external input"?

Is there always a deviation between "events inferred by NARS" and "events occurring in the external environment" (for example, concluding G. :|: without having received input G. :|:)?
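Regarding anticipations, here is a rough Python sketch of one commonly described scheme (simplified, with hypothetical names, not any implementation's actual code): when <A =/> G>. fires on an observed A, the system records an anticipation of G at a predicted future time; a matching input confirms it, while a missed deadline "disappoints" it and would yield negative evidence for the implication.

```python
# A toy anticipation queue: "predict, then confirm or disappoint".
# All names here are hypothetical.

class Anticipation:
    def __init__(self, predicted_event, deadline):
        self.predicted_event = predicted_event  # e.g. "G"
        self.deadline = deadline                # cycle by which G must arrive
        self.status = "pending"

class AnticipationQueue:
    def __init__(self):
        self.clock = 0
        self.pending = []

    def anticipate(self, event, offset):
        """An implication fired: expect `event` within `offset` cycles."""
        a = Anticipation(event, self.clock + offset)
        self.pending.append(a)
        return a

    def observe(self, event):
        """An input event arrives: confirm any matching anticipation."""
        for a in self.pending:
            if a.status == "pending" and a.predicted_event == event:
                a.status = "confirmed"    # positive evidence for the rule

    def step(self):
        """Advance the clock; overdue anticipations are disappointed."""
        self.clock += 1
        for a in self.pending:
            if a.status == "pending" and self.clock > a.deadline:
                a.status = "disappointed" # negative evidence for the rule
```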


Maybe the mental operator ^wait could be implemented with a kind of delayed callback?

For example, <(*, {SELF}, 5) --> ^wait>. :|: could be a delayed callback from EXE: <(*, {SELF}, 5) --> ^wait>, like this:

```nars-terminal
EXE: <(*, {SELF}, 5) --> ^wait>
INFO: running 5 cycles...
OUT: <(*, {SELF}, 5) --> ^wait>. :|:
```

In this case, the "waiting" operation would act as a "delayed acknowledgment", and a more specific scenario could be expected:

```nars-terminal
IN: <(&/, A, <(*, {SELF}) --> ^left>, <(*, {SELF}, 5) --> ^wait>, <(*, {SELF}) --> ^right>) =/> GoodNar>.
IN: GoodNar! :|:
IN: A. :|:
INFO: running 1 cycle...
EXE: <(*, {SELF}) --> ^left>
INFO: running 5 cycles...
OUT: <(*, {SELF}, 5) --> ^wait>. :|:
INFO: running 1 cycle...
EXE: <(*, {SELF}) --> ^right>
```
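To make the "delayed acknowledgment" idea concrete, here is a minimal Python sketch (an assumed design, not any implementation's actual code) in which executing ^wait schedules its own feedback event for a later cycle:

```python
# A toy scheduler for ^wait as a delayed callback: executing the operation
# registers a feedback event that is emitted only once the requested number
# of inference cycles has elapsed.

class WaitScheduler:
    def __init__(self):
        self.clock = 0
        self.pending = []   # (due_cycle, feedback_event) pairs

    def execute_wait(self, cycles):
        # Corresponds to: EXE: <(*, {SELF}, cycles) --> ^wait>
        event = f"<(*, {{SELF}}, {cycles}) --> ^wait>. :|:"
        self.pending.append((self.clock + cycles, event))

    def step(self):
        """Run one inference cycle; return the feedback events now due."""
        self.clock += 1
        due = [e for t, e in self.pending if t <= self.clock]
        self.pending = [(t, e) for t, e in self.pending if t > self.clock]
        return due
```

Under this design, the OUT line in the scenario above would simply be the callback firing on the fifth cycle after execution.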

Is this approach more feasible than using "interval" terms?

My own thoughts are:

1. Pros: No need to introduce new "interval" terms
a) No need to consider numerical handling of intervals when processing "Sequential Conjunctions"
2. Cons:
a) It may still be necessary to introduce similar "numerical" terms, such as 5, as ordinary atomic terms to indicate "how many inference cycles to wait"
b) Since these are not specialized numerical terms, NARS might need some kind of hack when dealing with "numerical terms as Words", which could affect the consistency of "sentence processing" in the system.

I hope these thoughts will help more people :D

Pei Wang

Sep 16, 2024, 10:03:22 AM
to open...@googlegroups.com
Hi Tessergon,

These topics are still open, so different versions handle them differently. Feel free to try your ideas.

Short comments:

* A "Temporal Implication" or "Temporal Conjunction" can have any tense. What you noticed are the usual cases.

* When the time interval between A and B in A =/> B is not explicitly indicated, the occurrence time of B is estimated according to a default. The result will be used with a time projection.

* Anticipations will be directly supported in event buffers, not in memory-based inference in general. For the latter cases, something similar happens only when a belief is checked, as the system cannot check the occurrence time of every belief in memory in each cycle.

* To distinguish input tasks/beliefs from derived ones, the previous solution was to use the length of the evidential basis, though the new architecture requires all input tasks to be marked with a channel ID.

* It is possible (though not common) for a derived conclusion to have the ":|:" tense.

* The "^wait" operator has not been experimented with yet, so the design decision has to wait. ;-)

In general, my current idea about "timing" in operations is to depend on "meta-data" for short periods: for example, "Do A, then after 3 cycles do B" will be represented internally somehow (as mental operators or attributes of compound terms) and directly supported in the event buffer. For long periods (longer than the buffer size), linguistic expressions ("3 seconds", "after a while", "tomorrow", etc.) will be used, and these will become conditional beliefs or goals.

If I can find the time, I will write a report on the event buffer soon.

Regards,

Pei

Patrick Hammer

Sep 25, 2024, 4:58:47 PM
to open-nars
Hi!

I am a bit late to the party here.
But in case you decide to explicitly keep track of time deltas / intervals, your idea to unify them this way looks very reasonable!
In ONA I decided not to represent them explicitly, as the cases where the system needs to discriminate based on a particular time delta are rare.
Cases that come to mind, like activating an energy-saving lamp, are not even really such cases: even though there is a time delta, the delta is irrelevant for most purposes. What usually matters is to condition on the right events that predict others, so as to use them as anchor points, rather than relying on the passing of time as a conditional cue by itself. Even though humans can also do the latter (with some limited accuracy), I doubt it should be part of the fundamental sensorimotor mechanisms, which is also why ONA doesn't have it; other mechanisms need to explain that ability.

Best regards,
Patrick

"K. R. Thórisson"

Sep 27, 2024, 4:42:00 AM
to open...@googlegroups.com
Time is of the essence :-)

What matters is usually to condition on the right events that predict others, so as to use them as an anchor point, not to rely on the passing of time as a conditional cue by itself. 


Yes - BUT: the passing of time is a *necessary* condition for anything affecting anything else. Time is an integral part of experience of the physical world. Without a keen sense of time passing, no sensible (useful, effective, efficient) representation of cause and effect can be held by an agent.

"Time ignorance" is NARS' Achilles' heel.
 
=K



Patrick Hammer

Sep 27, 2024, 4:54:08 AM
to open-nars
I agree that time should not be ignored.
The primary concern is how time deltas are handled, both in representation and in coping with timing variations.
So far, comprehensive sequence-learning and event-prediction benchmarks have been a good guide; they turned out to be crucial for improving on that.

Best regards,
Patrick

Tessergon Ng

Sep 27, 2024, 5:33:34 AM
to open-nars

Hi Patrick,

Thanks for your insight into temporal & procedural inference of ONA!

Recently, I have been studying the core of ONA and the mechanisms of NAL-7 and NAL-8. I believe I can share my understanding based on this work:

According to my technical research, ONA directly adds occurrenceTimeOffset to Event and Implication to represent "time deltas / intervals" (one for NAL-7 and another for NAL-8). Given the binary-tree-based Term design in ONA, this seems relatively better. Although occurrenceTimeOffset is generally only used in Temporal Implications, explicitly tracking "time deltas" as terms would require Sequences in ONA to support multiple elements (like (A &/ ^Op) turning into ((A &/ +5) &/ ^Op)), which would increase the complexity of the corresponding terms and reduce overall reasoning efficiency (requiring multiple rule applications to extract A).
Additionally, given the "numeric atom ID → name of atomic term" semantic representation that ONA uses for Narsese, these "time deltas / intervals" might also require one atom ID per number (+1 is one atom and +5 is another), or a different encoding than binary trees (such as Atom[] {'+', 1} / Atom[] {'+', 5}), which could pose challenges for the data structure arrangement in ONA.
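As a simplified illustration of this metadata-based approach (in Python rather than ONA's actual C, and with hypothetical field names following the discussion, not ONA's source), a single scalar offset on an implication can carry the time delta, with zero degenerating to simultaneity:

```python
# A sketch of carrying the time delta as metadata on the implication,
# rather than as an interval term inside the compound.

from dataclasses import dataclass

@dataclass
class Implication:
    precondition: str
    consequent: str
    occurrence_time_offset: int   # time delta as metadata, not a term

    def is_parallel(self) -> bool:
        # offset == 0 would itself express "Parallel/Simultaneous"
        return self.occurrence_time_offset == 0

    def predicted_time(self, precondition_time: int) -> int:
        # Project the consequent's occurrence time from the precondition's.
        return precondition_time + self.occurrence_time_offset
```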

On a theoretical and logical level, I agree with your viewpoint.
Perhaps in ONA, occurrenceTimeOffset == 0 itself already achieves "Parallel/Simultaneous"?
Perhaps once it touches upon the level of "embodiment", NARS needs to adopt a more general, entirely new control mechanism compared to layers 1-6?
Once it involves "Anticipations", "Making Decisions", "Implication Considerations" and "Goal Suggestions", can pure "logical reasoning" from NAL 1-6 still be sufficient?
In this regard, the duality of ONA's "Semantic + Sensorimotor" mechanism (Declarative Inference for beliefs on NAL 1-6 + Temporal & Procedural Inference for goals on NAL 7~8) is deeply intriguing and logically appealing.
Since writing NARust 158, I increasingly feel that a NARS with only NAL 1-6 layers and without sensorimotor mechanisms is still incomplete; to achieve a complete AGI system, it cannot just perform abstract logical reasoning but must let the entire system "make the move" within an environment (whether virtual or real).
After studying ONA, I believe that even if this feeling does not come true, it will at least undergo a systematic validation.

Best regards,

Tessergon

Robert Johansson

Sep 27, 2024, 6:19:48 AM
to open...@googlegroups.com
Hi Kris and everyone else,

A brief comment on your comment that "Time ignorance" is NARS' Achilles' heel.

I would argue it is a strength of NARS that it can deal with both symbolic and non-symbolic forms of reasoning over time (causality).

The current ONA implementation, for example, is extremely effective at what I would call non-symbolic reasoning over time. Another way to put it is that it is "animal-level" causality at its core, implemented by temporal and procedural inference. It is possible to compile ONA with nothing left but these mechanisms, and even then the learning it can do (reasoning over time) is very impressive.

Our work in Stockholm is, as you know, guided by Relational Frame Theory (RFT). From the perspective of RFT, reasoning over time (symbolic) is an example of Arbitrarily Applicable Relational Responding (AARR). Being able to follow an instruction such as "After the bell has sounded, take the cake out of the oven" is an example of AARR that involves temporal relations. So is guiding someone through a city: "After X, go to Y, and after Y go to Z". Such reasoning, from the perspective of RFT, requires learning BEFORE/AFTER relations. When these are learned, they are grounded in time.

Our current idea of how to learn this is roughly the following:
  • An "animal-level" temporal learning contingency C(x), with some parameters x, is learned
  • Such a situation could also act as a cue for acquiring a symbolic relation R with parameters x, which the system learns at the same time as a contextual cue such as the word BEFORE
  • An implication along the lines of BEFORE && R(x) ==> C(x) can be derived from this

The great thing about this representation is that symbolic reasoning with BEFORE/AFTER can then occur at the symbolic level (planning in the form a human being does it), and can then (via the implication) be "grounded back" to the contingency C, so the system can execute the plan in a new situation that it hasn't been explicitly trained for.

These are ideas in progress but the general idea is that different layers of NARS contribute to different types of reasoning.

Patrick has done extremely amazing work recently with his "AniNAL" demonstrations (which excels in animal-like learning of all sorts) - and I think it is very powerful to imagine adding symbolic abilities on top of these, so that all symbolic relations can be grounded in the learning that AniNAL can do already today.

Once again, these ideas are in progress; they will take time to explore further, and even more time to implement solidly.

Personally I warmly recommend investigating these things bottom-up, starting with idealized scenarios/procedures, before moving to real-world applications. 

It would be great to collaborate on these things in the future - for example in a big EU project :D

Robert


Patrick Hammer

Oct 3, 2024, 5:19:13 AM
to open-nars
Hi Tessergon and everyone!

"Since writing NARust 158, I increasingly feel that a NARS with only NAL 1-6 layers and without sensorimotor mechanisms is still incomplete; to achieve a complete AGI system, it cannot just perform abstract logical reasoning but must let the entire system "make the move" within an environment (whether virtual or real)."

Sensorimotor functionality is crucial and must operate reliably. I largely agree with Kris about the limitations of previous NARS implementations, but I can assure you this is not the case with ONA, which was developed precisely to address these concerns. As a result, it has diverged from earlier temporal and procedural inference mechanisms and I am not worried about that since I can show its superior effectiveness.

Additionally, much of the knowledge traditionally provided to NARS in a somewhat 'platonic' manner should, ideally, be self-acquired by the system to be properly grounded in sensorimotor experience. Although ONA has recently incorporated some mechanisms to support this (I will start a discussion thread on this topic soon), the current capabilities are still inadequate for what an AGI system will ultimately require.

"In this regard, the duality of ONA's "Semantic + Sensorimotor" mechanism (Declarative Inference for beliefs on NAL 1-6 + Temporal & Procedural Inference for goals on NAL 7~8) is deeply intriguing and logically appealing."

This approach makes it more practical from an engineering perspective. The fact remains that no current NARS implementation can learn to play Pong without a clear separation of inference pathways—and Pong doesn’t even require multistep decision-making. For context, OpenNARS v3.0.x was the first implementation to introduce an event bag and implication tables in each concept specifically for temporal and procedural processing. ONA builds on these early efforts, enabling reliable goal derivation as well. To my knowledge, no other NARS implementation reliably learns and executes procedural knowledge. If anyone thinks otherwise, we have multiple benchmarks in our evaluation suite for comparison.

About four years ago, Christian Hahm compared ONA with OpenNARS v3.0.x in Pong ( https://github.com/ccrock4t/NARS-Pong ), which I believe was the last attempt to compare procedural reasoning between NARS implementations. The new OpenNARS design, intended to replace v3.0.x, has yet to reach a comparable (or ideally better) level of functionality.

"According to my technical research, ONA directly added occurrenceTimeOffset in Event and Implication to represent "time deltas / intervals". Now it seems that, given the binary-tree-based Term design in ONA, this is relatively better - although this occurrenceTimeOffset is generally only used in Temporal Implications"

Correct! :)

Best regards,
Patrick