Hi,
The pure Pattern Miner test is a pure OpenCog thing; it doesn't require the Unity3d game project. It has two modes, a single-machine mode and a distributed mode; see
http://wiki.opencog.org/wikihome/index.php/Pattern_Miner Both were merged into the main OpenCog branch before the end of last year. If you follow this wiki page, please use the version in the main branch. However, I haven't checked it again this year, so some parts of OpenCog it depends on may have changed while the Pattern Miner is still last year's version. I will try to test it again when I have time.
Yes, the current development branch is integrating with the Embodiment client to recognize patterns through unsupervised learning from the behaviour of other scripted NPCs in the Unity3d game world, for a demo in my PhD thesis. My impression is that the new Embodiment system is in ROS now, and that it is for connecting to robots rather than the Unity3d game world. There are three reasons I chose the old Embodiment + Unity3d game world:
1. The demo I am making does unsupervised learning from a noisy, multi-contextual environment, so it needs to be easy to script the NPC agents to perform a lot of random, different high-level actions: walk to something, pick something up, eat something, drop something, even use magic to heal someone or open a chest with a key... It would be hard to script a lot of NPC robots to do all of these.
2. Connecting the Pattern Miner to the current robotics project would also require a computer-vision processor to recognize low-level robotic actions before all the perceived knowledge could be turned into Atoms in the AtomSpace, and I assume that won't mature and work smoothly with the whole of OpenCog in the near future.
3. For a more visible, complete demo, after the Pattern Miner finds a pattern it needs a planner to use the newly found pattern in planning, so that a new plan can be sent to the client for execution. Since I am not there anymore, debugging the planner and the whole action-execution pipeline from OpenCog Embodiment to a robot and back to OpenCog would be very difficult for me, and it would probably also require a lot of complex robotics implementation, which I am not good at.
But once the perceiving modules and action-execution modules are all implemented and integrated well with the robots and OpenCog, it should not be hard to integrate my current work with the new Embodiment system.
BTW, for anyone who is interested in what exactly my current demo can do:
The motivation:
Most other kinds of learning in AI require manually picking the learning materials and examples for a certain task for the agent to learn from. But real human-level intelligence is able to just observe the whole world, without any pre-assigned tasks, and learn everything useful from it. Once a new task is given, a human can search their memory and find a solution. It is also different from sequence recognition: the action sequence for a task is not always contiguous; it can be interrupted. So the Pattern Miner aims to figure out the cause and effect of every single action, and then use planning/reasoning to assemble actions into a sequence when given a task.
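To make the idea concrete, here is a minimal, purely illustrative sketch (not the actual Pattern Miner code or API) of mining per-action cause-and-effect pairs by frequency from noisy, interleaved observations of several agents; all the names and the abstraction step are my own hypothetical choices:

```python
from collections import Counter

# Hypothetical observation log: (agent, action, effect observed right after).
# Actions from different NPCs are interleaved, and many have no relevant effect.
observations = [
    ("npc_a", "pick_up apple", None),
    ("npc_a", "heal rabbit", "rabbit healthy"),
    ("npc_b", "heal cat", "cat healthy"),
    ("npc_a", "open chest_yellow with key_yellow", "chest_yellow open"),
    ("npc_b", "open chest_red with key_red", "chest_red open"),
    ("npc_b", "walk_to tree", None),
]

def mine_effects(obs, min_support=2):
    """Count abstracted (action-type, effect-type) pairs; keep frequent ones."""
    counts = Counter()
    for _, action, effect in obs:
        if effect is None:
            continue  # no observed effect -> no causal signal in this sketch
        verb = action.split()[0]          # abstract away the concrete arguments
        effect_type = effect.split()[-1]  # e.g. "healthy", "open"
        counts[(verb, effect_type)] += 1
    return {pair: n for pair, n in counts.items() if n >= min_support}

print(mine_effects(observations))
# {('heal', 'healthy'): 2, ('open', 'open'): 2}
```

The point of the sketch is only the shape of the problem: noise (actions with no effect, multiple agents) is filtered out by support counting, and what survives are action-effect regularities that a planner can later chain into a sequence.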
The demo content:
There are two different tasks in the game world: open a chest, heal an animal.
There are a lot of NPCs doing random actions, including the two tasks, in the game world at the same time. For example: NPC A first picks up an apple; then walks to a rabbit and applies healing magic to heal it; walks to a yellow key; walks around randomly; drops or eats the apple; walks to a yellow chest; opens the yellow chest with the yellow key it is holding.
At some point, we give the AI agent a goal: open the black chest. It should then know, from the Pattern Miner, that a key of the same color as a chest can open that chest. If given the goal of healing the cat instead, the AI agent will know from the Pattern Miner that applying healing magic to an animal heals it.
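The chest goal above can be sketched as follows. This is a hypothetical illustration, not the real OpenCog planner: it just instantiates a mined rule that has been abstracted over a color variable (hold(key_$X) & at(chest_$X) & open => open(chest_$X)) for a concrete goal color; the world representation and function names are my own:

```python
# Hypothetical sketch of using a mined, color-abstracted rule in planning.

def plan_open_chest(goal_color, world):
    """Instantiate the mined 'same-color key opens chest' rule for a goal."""
    key = f"key_{goal_color}"
    chest = f"chest_{goal_color}"
    if key not in world["keys"] or chest not in world["chests"]:
        return None  # the rule cannot be instantiated in this world
    # Ground the rule into a concrete action sequence for the client to execute.
    return [f"walk_to {key}", f"pick_up {key}",
            f"walk_to {chest}", f"open {chest} with {key}"]

world = {"keys": {"key_yellow", "key_black"},
         "chests": {"chest_yellow", "chest_black"}}
print(plan_open_chest("black", world))
# ['walk_to key_black', 'pick_up key_black',
#  'walk_to chest_black', 'open chest_black with key_black']
```

The interesting part is that "black" never appeared in the observed NPC behaviour; the mined pattern generalizes over the color because it was abstracted to a variable during mining, which is what lets the agent handle the new goal.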