Hi Alexey,
OK.
I'm kind of a fan of user-driven design --
I'd like to see opencog users be concrete and specific, and say "well, gee, I wish I could do this-or-that, how can I do it?" Then it becomes clear how to answer: either "that's easy, just do XYZ", or "gosh, I see what you mean; we have to fix that".
I'm saying this because here in opencog-land, we have a habit of not doing that ...
By the way, you should know: Values were invented to encapsulate video and audio streams, and to off-load processing of those streams to DSPs and GPUs, e.g. by neural-net algorithms. I mention this because it is an extremely incomplete feature, but maybe useful for the current focus and development.
The example, from the Hanson Robotics days, was "detect applause" or "detect hand-clapping" or "detect whistling and shouting" ... so that the robot could respond. The atomese for this was supposed to be something like this:
(AudioStream "left-side-microphone")
;; this attaches to a ROS node and holds the audio stream for that microphone. See "RandomStream" in the current code base for a prototype example.
Processing would be, for example,
(AndLink
    (GreaterThan 0.5
        (Amplitude (LowPassFilter
            (GetValue (Predicate "left Mic") (Concept "robot")))))
    (GreaterThan 0.5
        (Amplitude (HiPassFilter
            (GetValue (Predicate "left Mic") (Concept "robot"))))))
where (GetValue (Predicate "left Mic") (Concept "robot")) is the key-value store location that returns (AudioStream "left-side-microphone")
(I guess I am now calling these "connectors", but whatever...)
The above would be compiled down to some code that runs on a DSP -- it would be the "applause detector" -- a broad-band white-noise detector.
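To make the intent concrete, here's a minimal Python sketch of what that compiled detector would compute -- this is my own illustration, not the actual Atomese compiler output or any real DSP code; the filter, amplitude, and threshold choices (one-pole low-pass, high-pass as the residual, RMS, 0.05) are all assumptions. The idea is just that applause is broadband, so you demand energy in both a low band and a high band at once:

```python
import math
import random

def low_pass(samples, alpha=0.1):
    """One-pole low-pass filter: y[n] = y[n-1] + alpha * (x[n] - y[n-1])."""
    out, y = [], 0.0
    for x in samples:
        y += alpha * (x - y)
        out.append(y)
    return out

def amplitude(samples):
    """RMS amplitude of a block of samples."""
    return math.sqrt(sum(x * x for x in samples) / len(samples))

def is_broadband(samples, threshold=0.05):
    """Applause-like sound: energy in BOTH the low and the high band.
    High-pass here is just the residual (input minus low-pass) -- crude,
    but enough to show the AndLink-of-two-GreaterThans structure."""
    lo = low_pass(samples)
    hi = [x - l for x, l in zip(samples, lo)]
    return amplitude(lo) > threshold and amplitude(hi) > threshold

random.seed(42)
# White noise stands in for applause; a slow sine stands in for a low hum.
noise = [random.uniform(-1.0, 1.0) for _ in range(4096)]
tone = [0.5 * math.sin(2 * math.pi * 0.001 * n) for n in range(4096)]

print(is_broadband(noise))  # broadband -> True
print(is_broadband(tone))   # narrowband -> False
```

The two `amplitude(...) > threshold` tests play the role of the two GreaterThan clauses, and the `and` plays the role of the AndLink.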
Obviously, the goal of this representation was to let MOSES (or AS-MOSES) learn new/better filter representations, which could be compiled and sent down to the DSP to try out.
There was some talk with Ralf (one of the Hong Kong guys) about doing something similar for video processing on GPUs with the neural-net code of that era. I think someone (Mandeep?) built a "salience detector", but the atomese for it is lost to the sands of time. It took video in and spat out a couple of x-y-z locations for the robot to turn and look at.
I'm mentioning this because you seem to still be interested in neural nets, based on Ben's Hyperon talk, and so we've got this old infrastructure that was intended for neural nets, but it sits there, unloved and unused...
-- Linas