Explaining and logging reasons for emotional changes (and other stuff)


Ben Goertzel

Jan 4, 2017, 3:01:40 AM
to opencog, Eddie Monroe
Eddie,

one thing we were talking about yesterday at the office, was
implementing more features for introspective verbal response

i.e. not just “how are you feeling?” but “why are you feeling happy?”
… “why are you feeling that way?”

one design idea we thought of is — basically, for each “reason for
having a feeling”, a StateLink is set … then if the question is asked,
the StateLink can be consulted for its value

acceptable sorts of answers might be

(to e.g. “why are you feeling happy?”)

— “Because I heard some happy words”

— “Because you said ‘I love you’”

— “I’m just in a good mood” (if it’s just a random happiness
fluctuation via the happiness equation)

— “Because I experienced novelty” (if the happiness was caused by
fulfillment of the novelty goal)

— “Because I saw a new face” (if seeing a new face caused fulfillment
of the novelty goal, which caused increase in happiness)

etc.
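Setting aside Atomese syntax, the behavior being described is just keyed mutable state: one “most recent reason” slot per feeling, overwritten whenever an event changes that feeling, and consulted when the “why” question comes in. A plain-Python stand-in (names and answer templates are illustrative, not OpenCog API):

```python
# Stand-in for the StateLink idea: like a StateLink, setting a new
# value for a key replaces the old one.
reason_for_feeling = {}

def record_reason(feeling, reason):
    """Overwrite the most recent reason for this feeling."""
    reason_for_feeling[feeling] = reason

def answer_why(feeling):
    """Consult the stored reason when asked 'why are you feeling X?'"""
    reason = reason_for_feeling.get(feeling)
    if reason is None:
        # No recorded cause, e.g. a random fluctuation via the
        # happiness equation
        return "I'm just in a good mood"
    return "Because " + reason

record_reason("happy", "I heard some happy words")
record_reason("happy", "I saw a new face")  # overwrites, StateLink-style
print(answer_why("happy"))  # -> Because I saw a new face
```

The per-feeling keying matches one of the options in the thread (a “most recent reason for feeling happy” slot per emotion); a single “reason for current feeling” slot would just be a single key.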

...

This also ties in with another idea we were discussing, which is to
make a Logger that is intended for reporting “conceptually significant
internal events” … like,

— “DuckDuckGo psi rule fired”
— “A new face, Face345, appeared”
— “Sentence ‘Who stole my head cheese?’ was parsed”
— “Animation ‘smile_44’ was executed”
— etc.


Then by looking at the output of this Logger (on the command line or
via piping it to the Web UI), one could see a running list of “what
was happening inside OpenCog”, at a high level … which would be very
useful for real-time monitoring

I suppose that extreme emotion changes should also be captured by this
Logger, e.g.

— “Happiness level increased from .2 to .8”
— “Achievement of novelty goal caused happiness level to increase from .2 to .8”


Obviously, figuring out what to log and how to report it will require
some judgment...


but it would be very useful to have this sort of high-level log of
significant internal events, as a distinct file from a lower-level log
that reports a whole lot of detailed stuff that is useful for code
debugging but less so for behavior monitoring and tuning
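A minimal sketch of that separation using Python’s stdlib logging (the logger name, format, and in-memory stream are illustrative assumptions; the real thing would write to its own file or pipe to the Web UI, distinct from the debug log):

```python
import io
import logging

# A dedicated high-level event logger, kept separate from the
# detailed debug log.  The name "opencog.events" is made up for
# illustration, not an actual OpenCog logger.
event_log = logging.getLogger("opencog.events")
event_log.setLevel(logging.INFO)
event_log.propagate = False  # keep events out of the low-level debug log

stream = io.StringIO()  # stand-in for a separate events.log file
handler = logging.StreamHandler(stream)
handler.setFormatter(logging.Formatter("EVENT %(message)s"))
event_log.addHandler(handler)

def log_event(message: str) -> None:
    """Record a conceptually significant internal event."""
    event_log.info(message)

log_event("A new face, Face345, appeared")
log_event("Happiness level increased from .2 to .8")
```

Tailing just this one stream then gives the running “what is happening inside OpenCog” view, while the verbose debug output stays in its own file.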

ben


--
Ben Goertzel, PhD
http://goertzel.org

“I tell my students, when you go to these meetings, see what direction
everyone is headed, so you can go in the opposite direction. Don’t
polish the brass on the bandwagon.” – V. S. Ramachandran

Eddie Monroe

Jan 4, 2017, 2:36:47 PM
to Ben Goertzel, opencog
Yeah, that seems like it would be a good bang for the buck from the user-experience perspective. And yeah, it could initially be implemented with a StateLink, with more complex reasoning down the line. Gets me thinking about causality hypothesizing and representation in general.

There are a few subtleties to be considered, such as that events don't influence emotions directly but rather through the lower-level modulators, with the higher-level emotions being a function of the modulators. Another is that in our model influencing events nudge things in one direction or another rather than setting values absolutely. So for example, hearing happy words increases positive valence, but if the robot is feeling extremely sad, it might (depending on parameter settings) take a few happy sentences to move it into feeling happy rather than sad.
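To make the nudging concrete, a toy sketch (the clamp range and the per-sentence delta are made-up parameters, not values from our model):

```python
def nudge(value, delta, lo=-1.0, hi=1.0):
    """Nudge an affect value by delta, clamped to [lo, hi]."""
    return max(lo, min(hi, value + delta))

# A very sad robot hears happy sentences: each one nudges valence up,
# but it takes several just to get back to neutral.
valence = -0.9
for _ in range(3):
    valence = nudge(valence, 0.3)  # per-sentence boost: made-up parameter
# valence is now roughly 0.0 -- still not positive after three sentences
```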

I need to think through the design, but I'm wondering if you already have something in mind for the StateLink representation. For starting off, are you imagining a single StateLink for "reason for having current feeling," or perhaps a "most recent reason for feeling happy" StateLink for each emotion? If you already have something in mind, please let me know.

Thanks,
Eddie