Matthew,
Thanks for the questions. The original version of this agent was
used in the programming exercises of the LIDA tutorial at the
AGI'11 conference. It was meant to show off the basics of the
framework and was part of a series of exercises on how to add and
configure the elements of an Agent. The agent is quite myopic as
far as its "range of vision" goes. The agent's health attribute
decreases over time; once it drops below 0.66, it compels the
agent to move about more and eat the hamburgers. In addition, the
agent should "flee" from the monkeys, which can harm it.
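In case it helps, here is a toy sketch of that health-driven
behavior; the decay rate, threshold constant, and method names are
illustrative only, not the agent's actual code:

```java
// Illustrative sketch of the behavior described above: health decays
// over time, and below a threshold the agent is driven to seek food.
// All names and numbers here are made up for illustration.
public class HealthDemo {
    static final double HUNGER_THRESHOLD = 0.66;
    double health = 1.0;

    void tick() {
        health -= 0.1; // health decreases over time
    }

    boolean isHungry() {
        return health < HUNGER_THRESHOLD; // below 0.66, go eat hamburgers
    }

    public static void main(String[] args) {
        HealthDemo agent = new HealthDemo();
        while (!agent.isHungry()) {
            agent.tick();
        }
        // After enough ticks the agent crosses the threshold
        System.out.println(agent.isHungry()); // prints true
    }
}
```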
Version 1.2 of this agent involves 1) a more sophisticated action
selection module, inspired by Maes' "Behavior Network", 2) an
updated ProceduralMemory, and 3) some updates to the
AttentionCodelets -- all of which come with version 1.2 of the
framework.
I do not believe the agent will exhibit any learning, though
there's a chance that the AttentionCodelets' base-level activation
may be changing -- I don't know for sure offhand. In version 1.2,
only the AttentionCodeletModule has a learning algorithm that
could potentially alter the agent's mind, more specifically its
AttentionCodelets, during execution.
Generally speaking, all FrameworkModules implementing the
BroadcastListener interface receive each conscious broadcast. So
if you want to try experimenting with the learning algorithms, you
could override the implemented receiveBroadcast() methods (calling
super first). PAMImpl, EpisodicMemoryImpl, and ProceduralMemoryImpl
are currently much in need of learning algorithms.
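The override pattern I mean looks roughly like the sketch below.
Note the types here are stand-ins -- check the framework's actual
BroadcastListener interface for the real receiveBroadcast()
signature; the point is only that the subclass calls super first
and then adds its own learning step:

```java
// Stand-in for the framework's BroadcastListener interface; the real
// signature differs (it takes framework-specific broadcast content).
interface BroadcastListener {
    void receiveBroadcast(Object coalition);
}

// Stand-in for a module such as ProceduralMemoryImpl that already
// implements receiveBroadcast().
class ModuleStub implements BroadcastListener {
    @Override
    public void receiveBroadcast(Object coalition) {
        // the module's default handling of the conscious broadcast
    }
}

// Your experimental subclass: call super first, then learn.
class LearningModule extends ModuleStub {
    int broadcastsSeen = 0; // toy stand-in for real learning state

    @Override
    public void receiveBroadcast(Object coalition) {
        super.receiveBroadcast(coalition); // preserve default behavior
        broadcastsSeen++;                  // then apply your learning step
    }
}

public class OverrideDemo {
    public static void main(String[] args) {
        LearningModule module = new LearningModule();
        module.receiveBroadcast("coalition-1");
        module.receiveBroadcast("coalition-2");
        System.out.println(module.broadcastsSeen); // prints 2
    }
}
```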
Best,
Ryan