OpenNARS human like chatbot


neurop...@gmail.com · Jul 11, 2017, 6:14:06 AM · to open-nars
Hi, I made this architecture which I am hoping to implement soon. Haven't figured out the details quite yet. 

I made a diagram describing the architecture (a little messy); credit to another member of this forum for the idea of using a diagram.

I am also still new to OpenNARS so please let me know what you think of this idea. 

Just to clarify: the emotional module in this architecture is different from NARS's existing emotional functions.
It will basically be a mix of neural networks and other programs that determine whether the current situation is desirable and feed that desire value (reward) back to OpenNARS.

The initial goal for OpenNARS in this architecture will always be to maximise the desire value/reward.
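As a sketch, that initial goal could be seeded into OpenNARS as a standing Narsese goal on a reward term. The vocabulary here ([rewarded], the use of {SELF}) is illustrative for this proposed architecture, not an existing OpenNARS convention:

```
' hypothetical vocabulary; the [rewarded] property would be asserted
' by the external emotional module, not by OpenNARS itself
<{SELF} --> [rewarded]>!        ' standing goal: be in the rewarded state
<{SELF} --> [rewarded]>. :|:    ' feedback event from the emotional module
```

The emotional module would translate its scalar desire value into the truth value of the feedback judgment, and OpenNARS would then derive subgoals for operations it has learned lead to that state.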



neurop...@gmail.com · Jul 12, 2017, 6:09:17 AM · to open-nars
I have been thinking: maybe that is not the right way to input sentences into the system. The system can't learn from the user how to respond, because it never gets the original sentences, only a processed version.

Pei Wang · Jul 12, 2017, 6:34:01 AM · to open-nars


neurop...@gmail.com · Jul 12, 2017, 9:29:57 AM · to open-nars
A few questions about that.
What about the procedural aspects of language? E.g.:

John: I am sleeping in a bed 
OpenNARS: Why are you sleeping in a bed?

That would of course be an example of curiosity. My idea is that, because the goal is always to maximise the reward given by the external emotional module, and asking questions is (in this case) considered desirable by that module, the NARS system learns that asking questions is desirable. In this conversation, a derived goal would therefore be to ask questions about John's bed.
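The derivation described above could be sketched in Narsese like this (the ^ask operator and the [rewarded] term are hypothetical names for this proposal, not existing OpenNARS operations):

```
' learned procedural knowledge: asking about the bed leads to reward
<(^ask, {SELF}, bed) =/> <{SELF} --> [rewarded]>>.
' standing goal supplied via the emotional module
<{SELF} --> [rewarded]>!
' goal deduction could then derive the operation subgoal:
' (^ask, {SELF}, bed)!
```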

However, I still don't fully get how the procedural part of a chatbot would work in OpenNARS. How would it learn to respond correctly?

Pei Wang · Jul 12, 2017, 5:55:07 PM · to open-nars
NARS is not driven by a reward function given from the outside. For its motivational mechanism, see http://www.cis.temple.edu/~pwang/Publication/motivation.pdf, and for how it handles procedural knowledge, see http://www.degruyter.com/view/j/jagi.2012.3.issue-3/v10229-011-0021-5/v10229-011-0021-5.xml?format=INT

Regards,

Pei
 


neurop...@gmail.com · Jul 13, 2017, 11:42:07 AM · to open-nars
When building an OpenNARS chatbot, aren't pronouns like "they", "he", and "she" a problem?
One day you can tell it "They are ...", and the next day mention "they" again in a different context. Does OpenNARS have any way of distinguishing the referents?

Pei Wang · Jul 13, 2017, 7:24:38 PM · to open-nars
We are not there yet, but pronouns will be handled just like how we humans handle them --- usually correctly, but sometimes with trouble.

Regards,

Pei


neurop...@gmail.com · Aug 11, 2017, 8:40:54 AM · to open-nars
A question for Patrick Hammer:
have you actually introduced Narlice into an IRC channel, and if so, how did that work out?

Thanks!

Patrick Hammer · Aug 23, 2017, 2:54:52 PM · to open-nars
Sorry for the late response.
"Narlice" was just a simple test of defining an "is included in the current sentence" property for a product representation of sentences.
It is part of what can be used to build chatbots like Alice, but it is not what we aim for.
What we usually do is use different representations for words and for the concepts they represent; see: https://cis.temple.edu/~pwang/Publication/NLP.pdf

Best regards,
Patrick

neurop...@gmail.com · Dec 10, 2017, 1:24:50 PM · to open-nars
How would you input a "because" sentence into OpenNARS?
For example: "A penguin is a bird because it has wings."
And how would I ask OpenNARS: "Why is a penguin a bird?"

Pei Wang · Dec 11, 2017, 10:43:35 AM · to open-nars
A sentence like "P because of Q" usually can be represented in Narsese as two judgments: "P" and "Q ==> P", and the "Why P?" question is usually represented as "?x ==> P", with an implicit presumption "P".
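Applied to the penguin example from the question, the encoding could look like this (term names illustrative; exact Narsese syntax may vary by version):

```
<penguin --> bird>.                                ' P: a penguin is a bird
<<penguin --> [winged]> ==> <penguin --> bird>>.   ' Q ==> P: having wings implies being a bird
<?x ==> <penguin --> bird>>?                       ' the question "Why is a penguin a bird?"
```

An answer to the last question would be the implication judgment, with ?x bound to <penguin --> [winged]>.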

Regards,

Pei
 

neurop...@gmail.com · Apr 15, 2018, 4:34:55 PM · to open-nars
How would "... means ..." be represented?
For example, "unreliable" means "undependable". And how would one deal with their likeness despite small differences?

Tony Lofthouse · Apr 15, 2018, 4:45:27 PM · to open...@googlegroups.com

Hi,

There are two ways to handle this:

1. Two terms can be similar, as in your example: <unreliable <-> undependable>, where <-> is similarity and the associated truth value defines the degree of similarity.

2. Two statements can be equivalent: <<(a * b) --> smaller> <=> <(b * a) --> larger>>, where <=> is equivalence and the degree of equivalence is given by the truth value.

regards
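To see how such a similarity judgment is then used in inference, here is a sketch (the {contractor} term and truth values are illustrative):

```
<unreliable <-> undependable>. %0.90;0.90%   ' the two terms are highly similar
<{contractor} --> unreliable>.               ' something known to be unreliable
' by the analogy rule, the system can derive, with reduced confidence:
<{contractor} --> undependable>.
```

The "small differences" are captured by the similarity's truth value: the weaker the similarity, the lower the confidence of any conclusion derived through it.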


neurop...@gmail.com · Apr 17, 2018, 10:53:29 AM · to open-nars
Thanks for the clarification. Another thing: could anyone help me understand how NARS learns to predict the function shown in this video?
https://youtu.be/pdaUNX7iKlQ?t=6m35s
Thanks!

Tony Lofthouse · Apr 18, 2018, 4:39:34 AM · to open...@googlegroups.com

Hi, learning the function in the video is based on sequence learning.

OpenNARS learns sequences by forming compound terms from incoming or derived events. For example, given the following input sequence:

a :|:
b :|:
c :|:

the system can learn the following patterns:

(&/, a, b)            where &/ is a sequential conjunction (a sequence)
<a =/> b>             where =/> is a predictive implication
(&/, b, c)
<b =/> c>
(&/, a, b, c)
<(&/, a, b) =/> c>
(&/, a, c)
<a =/> c>

Note: the time difference between the events is also significant. In this case it is assumed that they occur far enough apart to be considered sequential rather than concurrent.

One interpretation of a predictive implication is that it is a hypothesis that can be tested:

<a =/> b>             given a, b will follow at some later time

The degree of certainty of the hypothesis is represented by the truth value of the statement.

These hypotheses can be further constrained by the inclusion of preconditions:

<(&/, a, b) =/> c>    here a is a precondition of the hypothesis <b =/> c>; you can think of preconditions as a context in which the hypothesis will be true (to a degree)

So learning a function is a matter of learning a number of hypotheses and preconditions that predict the next most likely event. There is a lot more going on internally, but this captures the key theoretical points.

This mechanism is generalised in OpenNARS and is how all procedural learning takes place.

Kind regards
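A sketch of how such a hypothesis is tested over time, in the same input notation as above (the comments describe the system's side; no exact truth values are claimed):

```
a :|:      ' event a observed
b :|:      ' b follows; the system induces <a =/> b> with low initial confidence
a :|:      ' later, a occurs again: the hypothesis generates an anticipation of b
b :|:      ' b arrives as predicted; positive evidence raises the confidence
           ' had b failed to occur, the unmet anticipation would count as
           ' negative evidence against <a =/> b>
```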
