--
You received this message because you are subscribed to the Google Groups "open-nars" group.
To unsubscribe from this group and stop receiving emails from it, send an email to open-nars+unsubscribe@googlegroups.com.
To post to this group, send email to open...@googlegroups.com.
Visit this group at https://groups.google.com/group/open-nars.
For more options, visit https://groups.google.com/d/optout.
On Wed, Jul 12, 2017 at 6:09 PM, <neurop...@gmail.com> wrote:
I have been thinking: maybe that is not the right way to input the sentences into the system, because the system can't learn from the user how to respond. It doesn't get the original sentences, only a processed version.

Hi, I made this architecture, which I am hoping to implement soon. I haven't figured out the details quite yet. I made a diagram describing the architecture (a little messy); credits to the other person on this forum for the idea of a diagram. I am also still new to OpenNARS, so please let me know what you think of this idea.

Just to clarify: the emotional module in this architecture is different from NARS's existing emotional functions. It will basically be a mix of neural networks and other programs that determines whether the current situation is desirable and feeds that desire value (reward) back to OpenNARS. The initial goal for OpenNARS in this architecture will always be to maximise the desire value/reward.
Thanks!
Hi,
There are two ways to handle this:
1. Two terms can be similar, as in your example: <unreliable <-> undependable>, where <-> is similarity and the associated truth value defines the degree of similarity.
2. Two statements can be equivalent: <<(a * b) --> smaller> <=> <(b * a) --> larger>>, where <=> is equivalence and the degree of equivalence is given by the truth value.
regards
Hi, learning the function in the video is based on sequence learning.
OpenNARS learns sequences by forming compound terms of incoming or derived events:
For example, given the following input sequence:
a :|:
b :|:
c :|:
The system can learn the following patterns:
(&/, a, b) where &/ is a sequential conjunction or sequence
<a =/> b> where =/> is a predictive implication
(&/, b, c)
<b =/> c>
(&/, a, b, c)
<(&/, a, b) =/> c>
(&/, a, c)
<a =/> c>
Note: the time difference between the events is also significant. In this case it is assumed that they occur far enough apart to be considered sequential rather than concurrent.
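To make the enumeration above concrete, here is a minimal Python sketch of the patterns that can be formed from an ordered event stream. This is illustrative only: the real OpenNARS derives these incrementally during inference, attaching truth values and temporal intervals, rather than enumerating them up front.

```python
from itertools import combinations

def sequence_patterns(events):
    """Enumerate the sequence conjunctions (&/, ...) and predictive
    implications (=/>) formable from an ordered list of events.
    combinations() preserves input order, so every subsequence
    respects the original temporal order."""
    patterns = []
    for size in range(2, len(events) + 1):
        for combo in combinations(events, size):
            # Sequential conjunction over the subsequence
            patterns.append("(&/, " + ", ".join(combo) + ")")
    for size in range(2, len(events) + 1):
        for combo in combinations(events, size):
            # The earlier part predicts the final event
            antecedent = combo[:-1]
            if len(antecedent) == 1:
                a = antecedent[0]
            else:
                a = "(&/, " + ", ".join(antecedent) + ")"
            patterns.append("<" + a + " =/> " + combo[-1] + ">")
    return patterns

for p in sequence_patterns(["a", "b", "c"]):
    print(p)
```

Running this on the input stream a, b, c prints exactly the eight patterns listed above, including (&/, a, b, c) and <(&/, a, b) =/> c>.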
One interpretation of a predictive implication is that it is a hypothesis that can be tested:
<A =/> B>, meaning: given A, B will follow at some time afterwards.
The degree of certainty of the hypothesis is represented by the truth value of the statement
These hypotheses can be further constrained by the inclusion of preconditions:
<(&/, A, B) =/> C>, where A would be a pre-condition of the hypothesis <B =/> C>. You can think of pre-conditions as a context in which the hypothesis will be true (to a degree).
So learning a function is a matter of learning a number of hypotheses and preconditions that predict the next most likely event.
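As a rough sketch of that last point, prediction can be viewed as matching the recent event history against learned hypotheses and taking the most confident one. The hypothesis table, the confidence numbers, and the `predict` helper below are all hypothetical illustrations, not OpenNARS's actual control mechanism, which weighs evidence via Narsese truth values and attention.

```python
# Hypothetical learned hypotheses: antecedent (a tuple of events,
# i.e. a precondition sequence) -> list of (predicted event, confidence).
hypotheses = {
    ("a",): [("b", 0.9)],
    ("b",): [("c", 0.7)],
    ("a", "b"): [("c", 0.8)],   # like <(&/, a, b) =/> c>
}

def predict(history):
    """Return (event, confidence) for the most confident hypothesis
    whose antecedent matches a suffix of the recent event history."""
    best = None
    for length in range(len(history), 0, -1):
        key = tuple(history[-length:])
        for consequent, conf in hypotheses.get(key, []):
            if best is None or conf > best[1]:
                best = (consequent, conf)
    return best

print(predict(["a", "b"]))  # ('c', 0.8): the longer precondition wins
```

The longer antecedent ("a", "b") acts as a more specific context and, here, carries higher confidence than the bare ("b",) hypothesis.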
There is a lot more going on internally but this captures the key theoretical points.
This mechanism is generalised in OpenNARS and is how all procedural learning takes place.
Kind regards