Action result learning


Marcel Batista

Jul 8, 2013, 1:55:58 PM
to ccrg-m...@googlegroups.com
Hello all,

I know the learning mechanisms are not yet implemented in the current version of LIDA. However, I'd like to understand how the IDA model proposes that the results of actions are learned, given a certain action and the current contents of consciousness.

For example, suppose a simple agent that identifies red/blue squares/circles in the environment and can press one of four different buttons (one for each shape-color pair). If the agent presses the blue-circle button when there is a blue circle in the environment, it gets some reward. In the model, where does this learning happen? A few points should be addressed, I think:

1) Where in the system is the result of an action recognized? By some specialized kind of codelet? How would this work in a general way?

2) Once a result is recognized, should new nodes and links be created in PAM relating nodes from the current contents of consciousness to those newly perceived nodes and links, or should this modify the action scheme somehow, by adding the resultant node to the result section of the scheme? (A rough sketch of what I mean follows question 3.)

3) Which memories would be involved, and how would they be used? I imagine Episodic Memory would be a good guess.
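
To make question 2 concrete, here is a rough sketch of the kind of update I'm imagining. This is not the actual LIDA framework API; the Node and Scheme classes below are just hypothetical stand-ins for a context-action-result scheme, using the blue-circle example:

public class SchemeResultSketch {

    // Hypothetical stand-in for a PAM node (e.g. "blue-circle", "reward").
    static class Node {
        final String label;
        Node(String label) { this.label = label; }
        @Override public String toString() { return label; }
    }

    // Hypothetical minimal context-action-result scheme, as in procedural memory.
    static class Scheme {
        final java.util.Set<Node> context = new java.util.HashSet<>();
        final String action;
        final java.util.Set<Node> result = new java.util.HashSet<>();
        Scheme(String action) { this.action = action; }
    }

    public static void main(String[] args) {
        Node blueCircle = new Node("blue-circle");
        Node reward = new Node("reward");

        // Scheme before learning: context = blue circle present,
        // action = press the blue-circle button, result = empty.
        Scheme pressBlueCircleButton = new Scheme("press-button-blue-circle");
        pressBlueCircleButton.context.add(blueCircle);

        // Suppose the action is executed and, in a later broadcast,
        // the "reward" node is among the conscious contents.
        // My question 2: should learning then do something like this,
        // i.e. copy the newly perceived node into the scheme's result?
        pressBlueCircleButton.result.add(reward);

        System.out.println("context = " + pressBlueCircleButton.context
                + ", action = " + pressBlueCircleButton.action
                + ", result = " + pressBlueCircleButton.result);
    }
}

Or is this the wrong picture entirely, and the learning should instead happen as new nodes/links in PAM rather than as a change to the scheme?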

I have read the information I could find online, but all of it is very generic about how learning happens in LIDA.

Please advise =)