Probabilistic Logic Network suited for capturing sparse causalities?


Aymeric

Apr 7, 2022, 12:30:58 AM
to opencog
Hello everybody,


I am looking for a learning paradigm (and the system/framework/network that goes with it) in order to accomplish the following:


Consider an agent evolving in a 2D/3D world. At each time step we have a compact representation of its input in the form of thousands of binary variables (X1...X1000), each binary variable potentially representing a piece of knowledge about the input.


Now, imagine that there is a temporal causal relation between X1 and X9, i.e. when X1 is 1 at time step t, then X9 becomes 1 at time step t+1.
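
To make the kind of evidence I have in mind concrete, here is a toy sketch (plain NumPy, purely for illustration; the data, the 1% activation rate, and the variable indices are all invented):

import numpy as np

rng = np.random.default_rng(0)
T, N = 10_000, 1000                    # time steps, number of binary variables
states = rng.random((T, N)) < 0.01     # sparse random activations (made-up data)

# Inject the causal rule: whenever X1 is on at t, X9 is on at t+1.
states[1:, 8] |= states[:-1, 0]        # X1 = column 0, X9 = column 8 (0-indexed)

antecedent = states[:-1, 0]            # X1 at time t
consequent = states[1:, 8]             # X9 at time t+1

n_pos = int(np.sum(antecedent & consequent))   # X1=1 followed by X9=1
n_total = int(np.sum(antecedent))              # all occurrences of X1=1

print(f"X1 fired {n_total} times out of {T - 1} steps; "
      f"X9 followed in {n_pos} of them.")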


So my question is: how can I capture this kind of propositional knowledge?

I have been thinking about various ML paradigms such as Markov Logic Networks (which attach weights to first-order formulas) or Dynamic Bayesian Networks (which can represent temporal dependencies between nodes).

But I was wondering whether a Probabilistic Logic Network (PLN) might not be better suited.
One problem in particular that the framework should solve is that X1 -> X9 (between two consecutive time steps) may only be observed very sparsely (for example because X1 = true will rarely hold while the agent evolves), and yet we "know" that X1 -> X9.
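
My intuition for why PLN might fit is that (if I understand the PLN book correctly) its truth values carry both a strength and a confidence, where the confidence grows with the amount of evidence. So a rule like X1 -> X9 that has only been observed a handful of times can still be stored with high strength but low confidence. A rough sketch of that idea (my own toy code, not the actual PLN implementation; the lookahead constant k = 100 is just an assumed default):

def simple_truth_value(n_pos, n_total, k=100.0):
    """(strength, confidence) in the style of PLN simple truth values.

    n_pos   -- times X9 followed X1 on the next step
    n_total -- times X1 was observed at all
    k       -- evidence "lookahead" constant (assumed value, to be tuned)
    """
    strength = n_pos / n_total if n_total else 0.0
    confidence = n_total / (n_total + k)
    return strength, confidence

print(simple_truth_value(5, 5))      # seen 5 times: strength 1.0, confidence ~0.05
print(simple_truth_value(500, 500))  # seen 500 times: strength 1.0, confidence ~0.83

The appeal, to me, is that sparsity lowers the confidence rather than the strength, so a rarely-seen but reliable rule is not washed out.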

Best

Aymeric