specified signatures for links


Linas Vepstas

Jul 16, 2020, 1:45:41 PM
to Alexey Potapov, opencog
Listening to your talk, you mentioned "specified signatures for links" -- and "type-checking".

There is some code for this; it is incomplete and unused, mostly because no one was interested in it. For details, see

and
and

--linas
--
Verbogeny is one of the pleasurettes of a creatific thinkerizer.
        --Peter da Silva

Alexey Potapov

Jul 16, 2020, 4:07:00 PM
to Linas Vepstas, opencog
Hi Linas,
Thanks for listening. Yes, I know about these links, and I mentioned that there is already some machinery for types and signatures in Atomese. I meant precisely these links. Maybe I should mention them on the slides explicitly to avoid confusion.
Of course, in the context of the current design, the incomplete functionality of type checking and so on can be considered a minor issue, and the lack of interest in it is surely the reason for its incomplete state. But my point was that these minor issues are actually connected to more interesting, more AGI-ish issues, which could be taken into account in a new design...

-- Alexey

Thu, Jul 16, 2020 at 20:45, Linas Vepstas <linasv...@gmail.com>:

Linas Vepstas

Jul 23, 2020, 12:07:41 PM
to Alexey Potapov, opencog
Hi Alexey,

OK.

I'm kind of a fan of user-driven design --

I'd like to see opencog users be concrete and specific, and say "well, gee, I wish I could do this-or-that, how can I do it?" Then it becomes clear how to answer: either "that's easy, just do XYZ", or "gosh, I see what you mean; we have to fix that".

I'm saying this because here in opencog-land, we have a habit of not doing that ...

By the way, you should know: Values were invented to encapsulate video and audio streams (and to off-load the processing of those streams to DSPs and GPUs, e.g. by neural-net algorithms) ... I mention this because it is an extremely incomplete feature, but maybe useful for the current focus and development.

The example, from the Hanson Robotics days, was "detect applause" or "detect hand-clapping" or "detect whistling and shouting" ... so that the robot could respond. The Atomese for this was supposed to be something like this:

(AudioStream "left-side-microphone")

;; This attaches to a ROS node and holds the incoming audio stream for that microphone. See "RandomStream" in the current code base for a prototype example.
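[For reference, the RandomStream prototype mentioned above does exist today; a hedged sketch of how such a stream Value gets wired up in guile (the Concept/Predicate names here are just illustrative):

```
(use-modules (opencog))

; RandomStream is a FloatValue subclass that yields fresh random
; floats each time it is sampled; here, a vector of three of them.
(define strm (RandomStream (Number 3)))

; Attach the stream under a key, the same way an AudioStream would be:
(cog-set-value! (Concept "robot") (Predicate "left Mic") strm)

; Each fetch samples the stream anew:
(cog-value (Concept "robot") (Predicate "left Mic"))
```
]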

Processing would be, for example,

(AndLink
   (GreaterThan
        (Number 0.5)
        (Amplitude (LowPassFilter
               (GetValue (Predicate "left Mic") (Concept "robot")))))
   (GreaterThan
        (Number 0.5)
        (Amplitude (HiPassFilter
               (GetValue (Predicate "left Mic") (Concept "robot"))))))

where (GetValue (Predicate "left Mic") (Concept "robot")) is the key-value store location that returns (AudioStream "left-side-microphone").
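[A note on the wiring: in the current AtomSpace the retrieval atom is spelled ValueOf, taking the atom first and the key second. A hedged sketch; the AudioStream type itself is the proposed, not-yet-existing type from the message above:

```
; Attach the stream Value at the (atom, key) location.
; AudioStream is hypothetical; cog-set-value! is the real API.
(cog-set-value!
   (Concept "robot")                      ; the atom holding the value
   (Predicate "left Mic")                 ; the key
   (AudioStream "left-side-microphone"))  ; the stream Value

; At evaluation time, this resolves to that stream:
(ValueOf (Concept "robot") (Predicate "left Mic"))
```
]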

(I guess I am now calling these "connectors", but whatever...)

The above would be compiled down to some code that runs on a DSP -- it would be the "applause detector", a broad-band white-noise detector.

Obviously, the goal of this representation was to let MOSES (or AS-MOSES) learn new/better filter representations, which could be compiled and sent down to the DSP to try out.

There was some talk with Ralf (one of the Hong Kong guys) to do something similar for video processing on GPU's with the neural-net code of that era. I think someone (mandeep?) built a "salience detector", but the atomese for it is lost to the sands of time.  It took video in, and spit out a couple of x-y-z locations for the robot to turn and look at.

I'm mentioning this because you seem to still be interested in neural nets, based on Ben's hyperion talk, and so we've got this old infrastructure that was intended for neural nets, but it sits there, unloved and unused...

-- Linas