ALife Agent example


Matthew Lohbihler

Oct 3, 2012, 5:20:03 PM10/3/12
to ccrg-m...@googlegroups.com
Hi Ryan,

Is there a summary somewhere of what this example is meant to illustrate? My workstation has run over 2.5M ticks, but the behaviour of the agent appears to be roughly the same as when it started. Is some level of learning occurring, or is the agent static? (Looking at the BasicSensoryMemory class, all of the fields are refreshed with each runSensors execution, so the memory is transient there. Not sure if there are more profound effects elsewhere though.)

Thanks,
Matthew

Ryan J. McCall

Oct 3, 2012, 6:09:46 PM10/3/12
to ccrg-m...@googlegroups.com
Matthew,

Thanks for the questions. The original version of this agent was used in the programming exercises of the LIDA tutorial at the AGI'11 conference. It was meant to show off the basics of the framework and was part of a series of exercises on how to add and configure the elements of an Agent. The agent is quite myopic as far as its "range of vision" goes. The agent's health attribute decreases over time; once it falls below 0.66, it compels the agent to move about more and eat the hamburgers. In addition, the agent should "flee" from the monkeys, which can harm it.

Version 1.2 of this agent involves 1) a more sophisticated action selection module, inspired by Maes' "Behavior Network", 2) an updated ProceduralMemory, and 3) some updates to the AttentionCodelets which are all changes with version 1.2 of the framework.

I do not believe the agent will exhibit any learning, though there is a chance that the AttentionCodelets' base-level activation may be changing -- I don't know for sure offhand. In version 1.2, only the AttentionCodeletModule has a learning algorithm that could potentially alter the agent's mind, more specifically its AttentionCodelets, during execution.

Generally speaking, all FrameworkModules implementing the BroadcastListener interface receive each conscious broadcast. So if you want to experiment with learning algorithms, you could override the implemented receiveBroadcast() methods (calling super first). PAMImpl, EpisodicMemoryImpl, and ProceduralMemoryImpl are currently much in need of learning algorithms.
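To illustrate the pattern, here is a minimal sketch of subclassing a module and overriding receiveBroadcast() with the super call first. Note the types below are simplified stand-ins I made up for this sketch -- the real framework passes a Coalition object rather than a String, and the actual ProceduralMemoryImpl has a richer API -- so treat this as an outline of the override pattern, not working LIDA code.

```java
// Simplified stand-in for the framework's BroadcastListener interface.
// (The real interface receives a Coalition, not a String.)
interface BroadcastListener {
    void receiveBroadcast(String broadcastContent);
}

// Stand-in for a framework module such as ProceduralMemoryImpl.
class ProceduralMemoryImpl implements BroadcastListener {
    protected java.util.List<String> received = new java.util.ArrayList<>();

    @Override
    public void receiveBroadcast(String broadcastContent) {
        received.add(broadcastContent); // base class consumes the broadcast
    }
}

// Custom subclass: call super first, then run a learning step.
class LearningProceduralMemory extends ProceduralMemoryImpl {
    int learningSteps = 0;

    @Override
    public void receiveBroadcast(String broadcastContent) {
        super.receiveBroadcast(broadcastContent); // preserve base behaviour
        learn(broadcastContent);
    }

    private void learn(String broadcastContent) {
        learningSteps++; // placeholder for a real learning algorithm
    }
}

public class Main {
    public static void main(String[] args) {
        LearningProceduralMemory pm = new LearningProceduralMemory();
        pm.receiveBroadcast("coalition-1");
        pm.receiveBroadcast("coalition-2");
        System.out.println(pm.received.size() + " broadcasts, "
                + pm.learningSteps + " learning steps");
    }
}
```

Because every broadcast still reaches the base class, the module's normal behaviour is unchanged; the subclass only adds a hook where a learning algorithm could go.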

Best,

Ryan
--
You received this message because you are subscribed to the Google Groups "Cognitive Computing Research Group - CCRG" group.
To view this discussion on the web visit https://groups.google.com/d/msg/ccrg-memphis/-/4U2PhzAJ1jMJ.
To post to this group, send email to ccrg-m...@googlegroups.com.
To unsubscribe from this group, send email to ccrg-memphis...@googlegroups.com.
For more options, visit this group at http://groups.google.com/group/ccrg-memphis?hl=en.


--
Ryan J. McCall
Ph.D. Student, Dept. of Computer Science
Cognitive Computing Research Group
Institute for Intelligent Systems
The University of Memphis