Reasoning about succession


Patrick Hammer

Nov 29, 2025, 6:43:31 PM
to open-nars
Hi!

In my earlier experiments I often relied on temporal reasoning to study sequential dependencies, but I focused too heavily on numerical timing rather than relative temporal structure. One reason I went that way is that without numerical timing, deciding when an expected event is "too late" becomes an unmeasured ad-hoc choice, and the easy alternatives (having no Anticipation at all, or an inaccurate one, e.g. one that tolerates the entire buffer window by default) seem incomplete.

More recently I have examined a simpler, purely relative formulation compatible with negative evidence attribution. It is not intended to replace the numerical approach, but it serves as a clean abstraction for exploratory purposes in sequence learning, especially since the ideal approach to numerical handling and Anticipation has yet to be found.


Definition (Immediate-Succession Implication).
Let A =+> B denote an immediate-succession implication between events A and B.

  • Positive evidence:
    A supporting observation for A =+> B occurs when event A is observed and the immediately next observed event is B.

  • Negative evidence:
    A contradicting observation for A =+> B occurs when event A is observed and the immediately next observed event is not B (some other event).

The precise duration between A and its successor is ignored, as only the direct successor in the observation stream is considered. Similarly, (A &+ B) then denotes that event B is observed as the direct next event after A, rather than being a sequence in time that can "skip" unmentioned intermediate events.
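To make the evidence bookkeeping concrete, here is a minimal Python sketch of the definition above. It is not taken from any NARS implementation; the function name and the use of the standard frequency/confidence mapping with evidential horizon k=1 are my own choices for illustration:

```python
from collections import defaultdict

def succession_evidence(stream):
    """Accumulate evidence for A =+> B hypotheses from an event stream.

    Only each event's direct successor counts (durations are ignored):
      - (A, B) adjacent in the stream is positive evidence for A =+> B;
      - (A, C) adjacent with C != B is negative evidence for A =+> B.
    Returns {(A, B): (frequency, confidence)} using f = w+ / w and
    c = w / (w + k) with evidential horizon k = 1 (an assumption here).
    """
    successor_counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(stream, stream[1:]):
        successor_counts[a][b] += 1

    evidence = {}
    for a, succs in successor_counts.items():
        total = sum(succs.values())          # total evidence w for any "A =+> ?"
        for b, w_plus in succs.items():
            f = w_plus / total               # positive evidence / total evidence
            c = total / (total + 1)          # confidence with k = 1
            evidence[(a, b)] = (f, c)
    return evidence
```

For example, on the stream A, B, A, C, A, B the hypothesis A =+> B receives two positive and one negative observation (frequency 2/3), while A =+> C receives one positive and two negative.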

I have an implementation of this idea in case anybody wants to play with it.

Best regards,
Patrick

Pei Wang

Nov 30, 2025, 11:21:22 AM
to open...@googlegroups.com
Patrick,

Sounds good! Both quantitative (numerical) and qualitative (relative) time representations are needed; the former is mostly used in low-level perception, and the latter in high-level perception/cognition.

As for "immediate successor", will it depend on the level of description (granularity)? For example, even if the raw experience is "A, B, C",  "A =+> B" may get negative evidence if (B, C) is perceived as a compound.

To me, "A =/> B" is already "A is followed by B" with the time in between ignored. I guess the difference from your suggestion is that "A =/> B" still gets positive evidence from "A, C, B" (with a lower confidence caused by time-projection) when C is ignored (as accidental noise or low-intensity sensation). We can compare the two treatments in experiments to see if this "immediate-succession" should be handled as a qualitative difference from "succession". I'm afraid whether a succession is considered as "immediate" is context-sensitive -- think about the descriptions in a book on history.

Regards,

Pei



Patrick Hammer

Dec 3, 2025, 1:44:57 PM
to open...@googlegroups.com
Hi Pei!

I totally agree, the case you mentioned is exactly the difference.
I see =/> as fundamental, while =+> is just a simplified special case for experimentation, since it is easier to look only at the next event than to handle the general case. It clearly cannot replace =/>, because with observational inputs there are often time delays and intermediate events from multiple sensor streams, but it can work for restricted domains like natural language.

I have a prototype based on it which can remember sentences even though it is fed a sequence token by token, including sequences with overlapping sub-structures (e.g. c b x vs. d b y, disambiguated by <(&+ c b) =+> x> and <(&+ d b) =+> y> respectively). It learns such predictive implications, with the right disambiguating sequences as preconditions, and can chain the learned "local" hypotheses together to reconstruct a larger "global" pattern. While that is not enough to generate novel texts or to learn the structure of language or the meaning of words, it does allow the system to remember larger sequences when they are repeatedly observed, in a way that lets it "complete" the pattern by itself when it observes some "prefix" of the pattern.
Maybe that could be interesting to Bowen, as NARS implementations previously were not able to do that.
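For anyone who wants to see the disambiguation idea in miniature, here is a toy Python sketch (not the actual prototype; the function names, the fixed context length of two tokens, and the greedy completion strategy are all my own simplifying assumptions):

```python
from collections import defaultdict

def learn_implications(stream, context=2):
    """Learn rules of the form <(&+ w1 w2) =+> w3>: each length-2
    context in the token stream predicts its immediate successor,
    with counts serving as accumulated positive evidence."""
    rules = defaultdict(lambda: defaultdict(int))
    for i in range(len(stream) - context):
        key = tuple(stream[i:i + context])
        rules[key][stream[i + context]] += 1
    return rules

def complete(rules, prefix, steps, context=2):
    """Greedily extend a prefix by chaining learned local rules:
    at each step, pick the best-supported successor of the last
    `context` tokens, reconstructing the 'global' pattern."""
    seq = list(prefix)
    for _ in range(steps):
        key = tuple(seq[-context:])
        if key not in rules:
            break
        seq.append(max(rules[key], key=rules[key].get))
    return seq
```

On the stream c, b, x, d, b, y the single token b is ambiguous, but the compound preconditions (&+ c b) and (&+ d b) disambiguate it, so completing the prefix c b yields x while completing d b yields y.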

Best regards,
Patrick

