Question about the LIDA tutorial Exercises


Ildefons

2011/10/19 8:58:57
To: Cognitive Computing Research Group - CCRG
Dear LIDA experts,

My name is Ildefons Magrans. I am a research scientist at the Icelandic
Institute for Intelligent Machines ( http://www.iiim.is ). In the
context of my research (attention control), I am interested in using
the LIDA framework to evaluate different attention strategies.

I have been reading the LIDA model and framework papers, I have
installed the framework, and I have run the AGI 11 tutorial exercises.
Just recently I started following the notes "LIDA Tutorial Exercises,
Version 1.0" by Javier Snaider and Ryan McCall.

I am advancing through the tutorial and am now working on advanced
exercise 3 of the basic agent. I am stuck at creating my own feature
detector. Could you send me a code snippet for the init() and
detect() methods of this new class?

Thank you,
Ildefons

Ryan J. McCall

2011/10/20 11:34:18
To: Cognitive Computing Research Group - CCRG
Hi Ildefons,

I don't have a code snippet for this advanced exercise. The Shape
feature detector counts the number of pixels in a pixel matrix that
are not a certain color. Depending on the count, it detects a circle
or a square. So this exercise is asking for a detector that detects
the situation in which all the pixels are white.
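The counting idea behind that detector can be sketched outside the framework. This is a minimal, self-contained illustration of the logic only; the class and method names here are made up for this sketch and are not part of the LIDA Framework:

```java
/**
 * Minimal sketch of the pixel-counting idea behind the Shape feature
 * detector. Names and values are illustrative, not the tutorial's own.
 */
public class PixelCountSketch {

    /** Counts pixels that differ from the background color. */
    static int countForeground(int[] pixels, int backgroundColor) {
        int count = 0;
        for (int p : pixels) {
            if (p != backgroundColor) {
                count++;
            }
        }
        return count;
    }

    public static void main(String[] args) {
        int bg = 0xFFFFFFFF; // white background
        int fg = 0xFF000000; // black foreground
        int[] allWhite = {bg, bg, bg, bg};
        int[] someShape = {bg, fg, fg, bg};

        // A count of zero is the "no shape" situation the exercise asks for.
        System.out.println(countForeground(allWhite, bg));  // prints 0
        System.out.println(countForeground(someShape, bg)); // prints 2
    }
}
```

A real detector would then map a zero count to full activation (1.0) and any nonzero count to 0.0, as in the detect() method discussed below.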

Also if you have some questions about the init() or
runThisFrameworkTask() methods, there is some documentation on them in
"The LIDA Tutorial" pdf file that comes with the example projects
distribution.

Hope that helps,

Ryan

delbert

2011/10/20 12:47:39
To: Cognitive Computing Research Group - CCRG
Hello Ildefons,

Are you asking about the relationship of the feature detectors to the
rest of the LIDA Framework?

In the init() method you need to set up your parameters for
interacting with the Framework. Those parameters should agree with the
definitions in the agent.xml file, under the section <module
name="PerceptualAssociativeMemory">.

Then the result of running the feature detector needs to be a node
that can drive an attention codelet defined under the section <module
name="AttentionModule">.

This drives the action of releasing the button in sensory motor
memory, under the section <module name="SensoryMotorMemory">.

So, my feature detector looks like:

/*******************************************************************************
 *
 ******************************************************************************/
package myagent.featuredetectors;

import java.util.HashMap;
import java.util.Map;

import edu.memphis.ccrg.lida.pam.tasks.BasicDetectionAlgorithm;

public class NoShapeFeatureDetector extends BasicDetectionAlgorithm {

    private int backgroundColor = 0xFFFFFFFF;
    private Map<String, Object> smParams = new HashMap<String, Object>();

    @Override
    public void init() {
        super.init();
        smParams.put("mode", "all");
        backgroundColor = (Integer) getParam("backgroundColor", 0xFFFFFFFF);
    }

    @Override
    public double detect() {
        int[] layer = (int[]) sensoryMemory.getSensoryContent("visual", smParams);
        for (int i = 0; i < layer.length; i++) {
            if (layer[i] != backgroundColor) {
                return 0.0;
            }
        }
        return 1.0;
    }
}

and the agent.xml declarations look like:
...
<module name="PerceptualAssociativeMemory">
    ...
    <initialTasks>
        ...
        <task name="EmptyDetector">
            <tasktype>NoShapeDetector</tasktype>
            <ticksperrun>3</ticksperrun>
            <param name="backgroundColor" type="int">-1</param>
            <param name="node" type="string">empty</param>
        </task>
    </initialTasks>

    <initializerclass>edu.memphis.ccrg.lida.pam.BasicPamInitializer</initializerclass>
</module>

and

<module name="AttentionModule">
    <class>edu.memphis.ccrg.lida.attentioncodelets.AttentionCodeletModule</class>
    <associatedmodule>Workspace</associatedmodule>
    <associatedmodule>GlobalWorkspace</associatedmodule>
    <taskspawner>defaultTS</taskspawner>
    <initialTasks>
        ...
        <task name="EmptyCodelet">
            <tasktype>BasicAttentionCodelet</tasktype>
            <ticksperrun>5</ticksperrun>
            <param name="nodes" type="string">empty</param>
            <param name="refractoryPeriod" type="int">30</param>
            <param name="initialActivation" type="double">1.0</param>
        </task>
    </initialTasks>
</module>

and

<module name="SensoryMotorMemory">
    <class>edu.memphis.ccrg.lida.sensorymotormemory.BasicSensoryMotorMemory</class>
    <associatedmodule>Environment</associatedmodule>
    <param name="smm.1">action.pressOne,algorithm.press1</param>
    <param name="smm.2">action.pressTwo,algorithm.press2</param>
    <param name="smm.3">action.releasePress,algorithm.releasePress</param>
    <taskspawner>defaultTS</taskspawner>
    <initializerclass>edu.memphis.ccrg.lida.sensorymotormemory.BasicSensoryMotorMemoryInitializer</initializerclass>
</module>

Finally, NoShapeDetector (the feature detector) needs to be identified
in the <tasks> section of FactoryData.xml:

...
<tasks>
    ...
    <task name="NoShapeDetector">
        <class>myagent.featuredetectors.NoShapeFeatureDetector</class>
        <ticksperrun>5</ticksperrun>
        <associatedmodule>SensoryMemory</associatedmodule>
        <associatedmodule>PerceptualAssociativeMemory</associatedmodule>
        <param name="backgroundColor" type="int">-1</param>
        <param name="node" type="string">empty</param>
    </task>
    ...


Does this help?

delbert

Javier Snaider

2011/10/20 14:06:54
To: Cognitive Computing Research Group - CCRG
Hi Ildefons and Delbert,

Thanks, Delbert, for your answer. A couple of clarifications.

The parameters for a feature detector go inside the task that defines it (in the initialTasks section of the PerceptualAssociativeMemory module).

You can also define parameters for the entire module, but these are used by PerceptualAssociativeMemory itself rather than by a specific feature detector.

Feature detectors are domain specific, so what you need to "detect" depends on the environment, and the parameters you can use are up to you. You can reuse one of the base implementations, but this is not a requirement. Of course, every feature detector should excite at least one node, so "node" or "nodes" are common parameters (the base implementations read one of them). The ALife project has several examples.
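To illustrate the distinction, a sketch of the two places a parameter can live in agent.xml. The task-level parameters are taken from the example earlier in this thread; the module-level parameter name shown here is hypothetical, purely to mark where such parameters would go:

```xml
<module name="PerceptualAssociativeMemory">
    <!-- Module-level param: read by PerceptualAssociativeMemory itself.
         "someModuleParam" is a made-up name for illustration. -->
    <param name="someModuleParam" type="double">0.1</param>
    <initialTasks>
        <!-- Task-level params: read only by this feature detector's init() -->
        <task name="EmptyDetector">
            <tasktype>NoShapeDetector</tasktype>
            <ticksperrun>3</ticksperrun>
            <param name="backgroundColor" type="int">-1</param>
            <param name="node" type="string">empty</param>
        </task>
    </initialTasks>
</module>
```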


Javier



--
You received this message because you are subscribed to the Google Groups "Cognitive Computing Research Group - CCRG" group.
To post to this group, send email to ccrg-m...@googlegroups.com.
To unsubscribe from this group, send email to ccrg-memphis...@googlegroups.com.
For more options, visit this group at http://groups.google.com/group/ccrg-memphis?hl=en.

