Current status of the Space/Time Server


Joel Pitt

May 14, 2019, 10:29:48 PM
to opencog
Hi all,

I may be jumping into some OpenCog development, and I'm keen to know the current status of SpaceServer. I know the code in the opencog repo is deprecated, and there is a plan written by Linas from Oct 2018 here: https://wiki.opencog.org/w/SpaceServer

I also see there is a semi-recent branch from Misgana here: https://github.com/misgeatgit/opencog/tree/time-ocmap

Any other clues on people/links I should contact or be aware of?

Cheers,
Joel ( Long time no post! )

Linas Vepstas

May 15, 2019, 6:48:41 PM
to opencog, Vitaly Bogdanov
Hi Joel,

Good to hear from you! Yes, please please please do this!

On Tue, May 14, 2019 at 9:29 PM Joel Pitt <joel...@gmail.com> wrote:
Hi all,

I may be jumping into some OpenCog development, and I'm keen to know the current status of SpaceServer. I know the code in the opencog repo is deprecated,

As far as I know, it is NOT deprecated; on the contrary, it's fully functional, complete, finished, done, works-as-designed, etc. (It is also possible/likely that it can be improved, enhanced, optimized, etc., but that's not currently needed(?).) Unit tests are sorely missing.
 
and there is a plan written by Linas from Oct 2018 here: https://wiki.opencog.org/w/SpaceServer

I think that this wiki page is still partly/mostly(?) correct, but I would need to re-read, re-review it.

I also see there is a semi-recent branch from Misgana here: https://github.com/misgeatgit/opencog/tree/time-ocmap

Misgana implemented the core functions .. twice. Once, and that was merged, and a second time, because he forgot that he'd done it the first time :-/

Here's what is missing, and it is kind-of a "separate project": a "common sense" API that can provide yes/no/maybe-fuzzy answers to questions like "next-to", "above", "below", "in front of".

Right now, the spacetime server stores 3D/4D locations and that is fine and that "works perfectly" (TM) as far as I know, so that code is "done"(TM).  What it doesn't do is answer questions like "is A in front of B?"  Hacking this up isn't really hard ... figure out where you are, where A is, where B is, do some 3D math, get an answer. Done.
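Roughly -- and this is just a sketch in plain guile, assuming positions come to you as lists of three floats, whatever the real representation turns out to be:

```
; "A is in front of B", seen from the observer: A is closer than B,
; and both lie in roughly the same direction from the observer.
(define (dot a b) (apply + (map * a b)))
(define (minus a b) (map - a b))
(define (norm a) (sqrt (dot a a)))

(define (in-front-of? observer a b)
   (let ((oa (minus a observer))    ; vector observer -> A
         (ob (minus b observer)))   ; vector observer -> B
      (and (< (norm oa) (norm ob))  ; A is nearer than B
           ; directions agree: cosine of the angle between them > 0.9
           (> (/ (dot oa ob) (* (norm oa) (norm ob))) 0.9))))

(in-front-of? '(0 0 0) '(1 0 0) '(3 0.2 0))   ; => #t
```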

There are two "tricky" bits to this.  The first is common-sense: "in front of" only makes sense if A and B are about the same size, and differ in distance by inches or feet, not nanometers (unless you are talking about nanometer-sized things.)  For now, just ignore/hack around this with common-sense programming hacks. We can deal with the metaphysics later on.
 
The second "tricky bit" is to implement the API correctly -- specifically, as PredicateNodes. This is not hard either -- it is relatively straight-forward code.  The goal of this API is to allow ghost to evaluate the following

   EvaluationLink
          PredicateNode "is-in-front-of ?"
          ListLink
                 ConceptNode "big red box"
                 HumanBodyNode "visible person A who is probably Joel"

and get back a true/false/don't-know answer. This should be easy, because you can find  the 3D locations of both from a "well known" Value attached to those atoms.  Bingo, you're done. Misgana already wrote the code (and it is checked in and it "works"(TM)) that fetches those Values from the spacetime server.  
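In other words, the access pattern looks something like this -- the key name below is invented, just to illustrate; use whatever well-known key Misgana's code actually attaches:

```
(use-modules (opencog))

; Pull the well-known position Value off an atom and turn it into a
; plain scheme list, so ordinary 3D math can be done on it.
(define (position-of obj)
   (cog-value->list (cog-value obj (Predicate "*-position-*"))))

(position-of (Concept "big red box"))        ; => e.g. (1.2 0.4 0.0)
(position-of (Concept "visible person A"))   ; => e.g. (2.5 0.1 0.0)
```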

What is missing is a ROS component that grabs 3D positions from the Hanson Robotics 3D-object-tracking stack and jams it into the space server. If you know python, and ROS, and get the Hanson Robotics people to tell you which ROS node/ROS pipeline they're using for vision processing, this is easy.  ROS is great for modular, pipeline processing like this.  (But you don't have to use ROS. As long as you feed the spacetime server with 3D info from something, somewhere, it will "just work"(TM).)
 
I think that doing all this is really fairly easy, straightforward, and does not require any whizzy or difficult theory or mad programming skillz or any kind of woo-woo. At least, not at the basic level of wiring it up, and being able to use ghost to talk about what the robot camera sees.  Of course, later on, one could get fancy, but for now, we just need the basic core infrastructure coded up.  Once done, the language-to-perception will "just work"(TM).  Done this way, I think it's very modular -- after the base is done, people can go off and do whizzy neural-net-ish vision processing later, and ghost will provide the natural-language question-answering "for free", no extra work.

Any other clues on people/links I should contact or be aware of?

Not as far as I know. I'm willing to walk you through the details. It's really not hard, and shouldn't take all that long. It's even a good warmup for getting back into opencog.

Well, talk to Ben, I guess, and well, maybe Vitaly Bogdanov & team, who invented something completely different.  There is a back-story you need to know.  Vitaly, please correct me where I'm wrong or misunderstand.

So the core idea of having common-sense PredicateNodes for the prepositions (in-front/behind/above/bigger/smaller/etc.) evolved last spring.  The goal of PredicateNodes was to make them fit in with EvaluationLinks, etc., and fit in with OpenPsi, fit in with ghost, fit in with the pattern matcher, the pattern-miner, etc. Fit in with everything in the current opencog architecture.  Due to general mis-communication, Vitaly et al. never actually heard about this design.  They implemented something completely different and totally incompatible. Actually, I'm not even sure any more about what it is they built.  It seems like a tragic mistake, because the net result is a system that is incompatible with ... everything else.  I really really want to get back to the core idea of using PredicateNodes for everything, and hiding all the neural-net magic under the PredicateNodes, and not somewhere else (certainly not in python/C++/scheme code APIs).  How to rescue that effort, and get it to work with ghost -- well, that is a different conversation. If we could just have the basic PredicateNode API working, it would be future-proof and extendable, and I think it's just not that hard to do.  So yes, please please do it!

I wrote too much. Sorry.

-- Linas

--
cassette tapes - analog TV - film cameras - you

Vitaly Bogdanov

May 16, 2019, 9:17:07 AM
to opencog
They implemented something completely different and totally incompatible. Actually, I'm not even sure any more about what it is they built.  It seems like a tragic mistake, because the net result is a system that is incompatible with ... everything else.  I really really want to get back to the core idea of using PredicateNodes for everything, and hiding all the neural-net magic under the PredicateNodes, and not somewhere else (certainly not in python/C++/scheme code APIs).  How to rescue that effort, and get it to work with ghost -- well, that is a different conversation. If we could just have the basic PredicateNode API working, it would be future-proof and extendable, and I think it's just not that hard to do.  So yes, please please do it!

Some explanation to clarify the difference.

In the first case, the system updates the coordinates of the objects constantly. It also constantly analyses the scene and keeps in mind that some object such as a "cube" is here and that it has a "red" color. When the system needs to answer the question "What is to the left of the red cube?", it queries the atomspace and evaluates a predicate which finds the "red cube" and computes "left-of" using the coordinates of the cube and of the other objects in the scene.

In the second case, the system doesn't need to constantly add all the attributes of all the objects in the scene to the atomspace in order to be ready to process queries. Instead it uses a GroundedSchemaNode or GroundedPredicateNode to compute the predicates "is-red" and "left-of" on the fly, using the visual features of the objects it sees. The error is then backpropagated through the GroundedSchemaNode and GroundedPredicateNode to improve predicate accuracy.
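For illustration, a rough sketch of that second style in scheme -- the neural-network call is only a stand-in here, and all the names are invented:

```
(use-modules (opencog) (opencog exec))

; Stand-in for the real scorer: in our system this would run the vision
; model over the object's visual features and return a probability.
(define (nn-score-red obj) 0.87)

; GroundedPredicateNode convention: the scheme function receives the
; atoms from the ListLink and must return a TruthValue.
(define (nn-is-red obj)
   (stv (nn-score-red obj) 0.9))

(cog-evaluate!
   (Evaluation
      (GroundedPredicate "scm: nn-is-red")
      (List (Concept "cube-17"))))          ; => (stv 0.87 0.9)
```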

The two ways of processing requests do not exclude each other. The main difference is that in the second case you don't need to pre-calculate all the attributes you may need to answer the question.

Linas Vepstas

May 16, 2019, 12:50:30 PM
to opencog, Jamie Diprose, Desmond Germans

Hi Desmond, Jamie,
If you guys are no longer working on ROS sensor-fusion/object-identification & tracking code, let me know. If it is someone else, please put me in touch?  We still have ongoing discussions on how to handle 3D position data in opencog, see below. I would like to get everyone "on the same page" as it were.

Hi Vitaly,

On Thu, May 16, 2019 at 8:17 AM Vitaly Bogdanov <vsb...@gmail.com> wrote:
They implemented something completely different and totally incompatible. Actually, I'm not even sure any more about what it is they built.  It seems like a tragic mistake, because the net result is a system that is incompatible with ... everything else.  I really really want to get back to the core idea of using PredicateNodes for everything, and hiding all the neural-net magic under the PredicateNodes, and not somewhere else (certainly not in python/C++/scheme code APIs).  How to rescue that effort, and get it to work with ghost -- well, that is a different conversation. If we could just have the basic PredicateNode API working, it would be future-proof and extendable, and I think it's just not that hard to do.  So yes, please please do it!

Some explanation to clarify the difference.

Ah, well, there is less of a difference now, it seems, although you still do not use the Value API that I keep urging you to use ...


In the first case, the system updates the coordinates of the objects constantly. It also constantly analyses the scene and keeps in mind that some object such as a "cube" is here and that it has a "red" color.
 
Yes, the above is exactly what the spaceserver was designed to do -- it can keep track of objects, constantly.  Of course, you do not have to use the space server -- its use is optional, and not using it might have been the right design decision.  I don't know how or why you made the decision to use/not-use it.
 
When the system needs to answer the question "What is to the left of the red cube?", it queries the atomspace and evaluates a predicate which finds the "red cube" and computes "left-of" using the coordinates of the cube and of the other objects in the scene.

This is where I think you got it wrong. You must NOT query the atomspace! That is the wrong way to use the atomspace!  The atomspace was never meant for this kind of constant update!

The intended design is that there is a well-known Value that is attached to red-cube.  That Value knows how to obtain the correct 3D location from the SpaceServer (or some other server, e.g. your server). That Value ONLY fetches the location when it is asked for it, and ONLY THEN.  This is the code that Misgana wrote. Twice. You should NEVER directly shove 3D positions into the atomspace!!

The idea is that the atomspace stays static, slowly changing, with relatively few, low-frequency changes.  I.e. when a new object becomes visible, only then is the atomspace updated. When an object is no longer visible/forgotten-about/untracked, only then is the atomspace updated.  All of the high-frequency, jittery, fast-update object tracking happens in the space server (or in your server, or some other server). Again, I don't care which server is tracking object locations; I don't care very much about those implementation details. The only thing I care about is that the 3D locations are accessed with a Value.  That way, we can have a common, uniform API for all visual-processing and 3D spatial-reasoning systems.  (For example, some 3D spatial reasoning might be done on imaginary, pretend objects. The sizes, colors, locations of those objects will not come from your vision server; they will come in through some other subsystem.  However, the API will be the same.)

I should be able to tell the robot "Imagine that there is a six-foot red cube directly in front of Joel.  Would Joel still be visible?" and get the right answer to that. When the robot imagines this, it would not use the space-server, or your server, or the ROS-SLAM code for that imagination. However, since the access style/access API is effectively the same, it can still perform that spatial reasoning and get correct answers.
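To make that concrete: an imagined object would just get an ordinary FloatValue attached under the same (here, invented) key, and the predicate code cannot tell the difference:

```
(use-modules (opencog))

; An "imagined" object: nothing in the vision pipeline knows about it.
; We attach a plain FloatValue under the same well-known key, and the
; same in-front-of/left-of code works on it unchanged.
(define imagined-cube (Concept "imagined six-foot red cube"))
(cog-set-value! imagined-cube (Predicate "*-position-*")
   (FloatValue 1.0 0.0 0.0))   ; say, one meter in front of Joel
```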


In the second case, the system doesn't need to constantly add all the attributes of all the objects in the scene to the atomspace in order to be ready to process queries.

Yes. Exactly!  That is the whole point of the space-server + value design!  That is why I keep telling you to use it!
 
Instead it uses a GroundedSchemaNode or GroundedPredicateNode to compute the predicates "is-red" and "left-of" on the fly, using the visual features of the objects it sees. The error is then backpropagated through the GroundedSchemaNode and GroundedPredicateNode to improve predicate accuracy.

Oh. OK. Yes, that is a consistent design.  But I am guessing that you are not using generic left-of code to access values. I'm guessing you wrote some kind of custom code for this.  I would prefer to have generic left-of code, that worked for arbitrary positional data-sources (including imaginary ones), and this email is about convincing Joel (or Misgana) to write that generic, works-for-any-vision-system code.

Sensor fusion is another reason to have generic code -- besides your vision system, there is also a distinct vision system, based on ROS, including SLAM, etc. that Jamie Diprose is/has created, that is ALSO doing vision processing, and obtaining 3D coordinates and sizes for objects.  We want to unify that data with your data.  We can perform that sensor fusion in the space server, or in some other server; I don't particularly care. What I do care about is that the locations and sizes of objects are available through Values, so that ALL reasoning subsystems have access to them.

In the meantime, it is certainly possible to do this hack:

DefineLink
     DefinedPredicateNode   "is-left-of"
     GroundedPredicateNode "Vitalys-left-of-code"

and then, on an as-needed basis, swap in other API's:

DefineLink
     DefinedPredicateNode   "is-left-of"
     PredicateNode "some-other-value-based left-of"

We could even use StateLink instead of DefineLink to switch between different sensory subsystems.  In other words,  ghost should use (DefinedPredicateNode   "is-left-of") instead of (GroundedPredicateNode "Vitalys-left-of-code") when it does its language processing.  (I assume that Amen and/or Man Hin are involved in the language spatial-reasoning side of things... right?).
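Something like this, roughly (names invented, untested):

```
(use-modules (opencog) (opencog exec))

; A StateLink allows at most one "current implementation" per anchor,
; so re-issuing it swaps the sensory subsystem behind the predicate.
(State (Anchor "left-of-impl") (GroundedPredicate "scm: vitalys-left-of"))

; Later, switch to a generic, Value-based implementation:
(State (Anchor "left-of-impl") (Predicate "octomap-left-of"))

; Whoever needs the current implementation can look it up:
(cog-execute! (Get (State (Anchor "left-of-impl") (Variable "$impl"))))
```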
 

The two ways of processing requests do not exclude each other. The main difference is that in the second case you don't need to pre-calculate all the attributes you may need to answer the question.

Yes, the reason that the Value system was created was to avoid having to precalculate anything.  That is why it exists. That is why I want you to use it.  It's generic.

Vitaly Bogdanov

May 17, 2019, 11:23:41 AM
to opencog
Hi Linas,

The intended design is that there is a well-known Value that is attached to red-cube.  That Value knows how to obtain the correct 3D location from the SpaceServer (or some other server, e.g. your server). That Value ONLY fetches the location when it is asked for it, and ONLY THEN.  This is the code that Misgana wrote. Twice. You should NEVER directly shove 3D positions into the atomspace!!

Am I right that the implementation of this design is still not merged? In the current atomspace/opencog code I can only find OctoMapValue, which inherits from FloatValue (and keeps a vector of doubles in it) and has an update() method which updates the value using OctoMapNode. So the value should be updated before using it.
 
That way, we can have a common, uniform API for all visual-processing and 3D spatial-reasoning systems.  (For example, some 3D spatial reasoning might be done on imaginary, pretend objects. The sizes, colors, locations of those objects will not come from your vision server; they will come in through some other subsystem.  However, the API will be the same.)

I should be able to tell the robot "Imagine that there is a six-foot red cube directly in front of Joel.  Would Joel still be visible?" and get the right answer to that. When the robot imagines this, it would not use the space-server, or your server, or the ROS-SLAM code for that imagination. However, since the access style/access API is effectively the same, it can still perform that spatial reasoning and get correct answers.

Yes, I think I see your point. We could wrap a NN in a value and return its results on demand, to be analyzed by generic predicates. Such an approach allows getting results from visual features on demand. But backpropagation seems to not work with this approach.

For example, if a generic predicate uses a value to verify whether an object is "red", then what it expects from the value is a pair of doubles (mean, confidence). If our value returns such a pair of doubles, it forgets the whole computation graph which led to this result. It means that we are not able to propagate an error back through the calculation graph and improve the next result.
 
As far as I see, such unification is not possible without changing the predicate logic, or introducing an outer system to calculate predicates and keep the computation graph.

Linas Vepstas

May 18, 2019, 2:59:38 PM
to opencog
On Fri, May 17, 2019 at 10:23 AM Vitaly Bogdanov <vsb...@gmail.com> wrote:
Hi Linas,

The intended design is that there is a well-known Value that is attached to red-cube.  That Value knows how to obtain the correct 3D location from the SpaceServer (or some other server, e.g. your server). That Value ONLY fetches the location when it is asked for it, and ONLY THEN.  This is the code that Misgana wrote. Twice. You should NEVER directly shove 3D positions into the atomspace!!

Am I right that the implementation of this design is still not merged?

It was merged half-a-year-ago, a year-ago.

In the current atomspace/opencog code I can only find OctoMapValue, which inherits from FloatValue (and keeps a vector of doubles in it) and has an update() method which updates the value using OctoMapNode. So the value should be updated before using it.

Yes, but I think you misunderstand how it works. It is updated only when the value is asked for. If no one asks for the value, it is never updated.  This design allows the current value to be managed externally, in some other component, and it is then brought into the atomspace "on demand", only when some user of the atomspace tries to look at the value.   Think of it as a "smart value" -- when the user asks the value "what number are you holding?", the value isn't holding anything; instead, it goes to the external server, gets the latest, and returns that.

In this case, OctoMapValue never changes, until you ask for its current value; then it goes to the space-server, gets the current value, and returns that.

The examples directory contains a much simpler example: "RandomValue" -- which uses `random()` as the "external system".  You're supposed to cut-n-paste that code, replace `random()` by your server, and that's it -- done.  That's all that  OctoMapValue is - just a small, simple wrapper.
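For instance -- assuming the example I have in mind is the RandomStream value type in the current atomspace sources -- the "smart value" behaviour looks like this at the guile prompt:

```
(use-modules (opencog))

(define foo (Concept "foo"))
(define key (Predicate "some key"))
(cog-set-value! foo key (RandomStream 3))

; Every read goes back to the "external system" (here, the RNG), so
; each access returns fresh numbers.
(cog-value->list (cog-value foo key))   ; => three random floats
(cog-value->list (cog-value foo key))   ; => three different floats
```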

Also - right now, OctoMapValue only holds a 3D position, I think. We need an API for size, width, height, orientation.  Maybe not all of that, right away, but at least a basic-size API.  So that natural language expressions like "See that small thing on the table? Point your finger at it" work.
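For example -- the key name here is made up, again -- a bounding box could be attached the same way a position is:

```
(use-modules (opencog))

; Hypothetical key: width, depth, height in meters, alongside the
; position value, so that "small thing" has something to compare.
(cog-set-value! (Concept "thing on the table")
   (Predicate "*-bounding-box-*")
   (FloatValue 0.05 0.05 0.08))
```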

Having these other attributes does not change the overall design. The Values don't update until they are accessed.

 
That way, we can have a common, uniform API for all visual-processing and 3D spatial-reasoning systems.  (For example, some 3D spatial reasoning might be done on imaginary, pretend objects. The sizes, colors, locations of those objects will not come from your vision server; they will come in through some other subsystem.  However, the API will be the same.)

I should be able to tell the robot "Imagine that there is a six-foot red cube directly in front of Joel.  Would Joel still be visible?" and get the right answer to that. When the robot imagines this, it would not use the space-server, or your server, or the ROS-SLAM code for that imagination. However, since the access style/access API is effectively the same, it can still perform that spatial reasoning and get correct answers.

Yes, I think I see your point. We could wrap NN by value and return its results on demand to be analyzed by generic predicates. Such approach allows getting results from visual features on demand. But back propagation seems to not work with this approach.

Yes, not everything can work with generic predicates. But it will work with two important cases: traditional robotics sensory subsystems, and with imagined (virtual?) worlds e.g. from a blue-print, engineering drawing, math-textbook figure or similar abstract system that provides spatial info, without being "visual" in the usual sense.

For example, if a generic predicate uses a value to verify whether an object is "red", then what it expects from the value is a pair of doubles (mean, confidence). If our value returns such a pair of doubles, it forgets the whole computation graph which led to this result. It means that we are not able to propagate an error back through the calculation graph and improve the next result.

? I don't understand. What do you need to propagate back? 
 
As far as I see, such unification is not possible without changing the predicate logic, or introducing an outer system to calculate predicates and keep the computation graph.

Yes, it's fine to *also* have externally-calculated predicates, and that is what you have, more-or-less.  Having generic predicates that work with the space server would also be nice to have (and then, to have the space server hooked up to ROS, which it isn't, right now) 

Oh, and finally, of course, to hook up the predicates (generic, or external) to language (ghost). For this last step, we need the ghost maintainers -- Amen, Man Hin, others -- to look at, think about, and comment on what predicates they could actually use and support easily and quickly.  It's easy to make a list of prepositional and comparative phrases -- there are about 100 of them (above, below, behind, next-to, bigger-than, taller, etc.) and we should start with some reasonable subset of these, hook them into the chatbot, and get things like question-answering going.  As far as I know, no one has begun any of this work yet, right?

-- Linas

Ben Goertzel

May 18, 2019, 4:55:42 PM
to opencog
***
For example, if a generic predicate uses a value to verify whether an object is "red", then what it expects from the value is a pair of doubles (mean, confidence). If our value returns such a pair of doubles, it forgets the whole computation graph which led to this result. It means that we are not able to propagate an error back through the calculation graph and improve the next result.

As far as I see, such unification is not possible without changing the predicate logic, or introducing an outer system to calculate predicates and keep the computation graph.
***

It seems analogous to what is done in Haskell to support backtracking
and Prolog-type cut and so forth. There one does use an "external"
system to manage historical records, but this external system is
hidden behind the monadic interface,

http://okmij.org/ftp/papers/LogicT.pdf

http://hackage.haskell.org/package/logict

This is all using continuations behind the scenes, and related to some
discussions Linas and I had earlier about making Atomspace schema
processing support continuations. If one is manipulating
continuations rather than just functions, then maintaining and
manipulating the external computation graph is "just" an efficient
shorthand for computing what the continuations denote...

-- Ben



--
Ben Goertzel, PhD
http://goertzel.org

"Listen: This world is the lunatic's sphere, / Don't always agree
it's real. / Even with my feet upon it / And the postman knowing my
door / My address is somewhere else." -- Hafiz

Linas Vepstas

May 18, 2019, 5:35:14 PM
to opencog
Ben,

You really don't need any fancy words for this. The design for this was done last spring; it was done for Alexey Potapov's team. It's called "Values", and the RandomValue example shows how to use it.  You can do continuations to your heart's delight under the covers.  There are no technical problems to solve, at this level. The meta-problem is a communication problem -- Alexey's team did not use that design; they invented something brand-new, and presented it as a fait accompli.  I believe that, by now, they've mostly-sort-of-ish returned to something close-ish to the original proposed design (I'm not quite entirely clear on that).

The remaining tasks are as I listed them:
-- develop a dozen or two predicates that ghost can hook into
-- Map the Potapov-team code to those predicates
-- Build "generic" versions of exactly those same predicates, that use OctoMapValue to do their stuff
-- Pipe the Hanson Robotics ROS pipeline into OctoMap
-- Write entertaining robot dialog that shows off those predicates

The difficult theoretical meta-issue might be "is the above a good way of hooking sensory input to natural language?" We can debate that for the next 100 years. But in the meanwhile, the five points above -- I think they're achievable; I think they're straightforward -- ordinary software developers can do it; there is no theoretical invention required; you don't even need to know what a continuation is. It's mostly just wire-it-up and debug-it.  On the traditional scale of Ben-Goertzel-interestingness, the above is "boring", it is "not even AI".  And that is actually a good thing: it allows a basic working demo to be built, and if we build it and don't let it bit-rot, then some/much of the tension with Hanson would evaporate.  It's a practical, doable sensory system, even if it is not any kind of theoretical lightning bolt.  It's a base-line.

The other meta-question is "who is managing this process?" -- it feels like progress is at a dead stand-still; no one is communicating on any communications channels visible to me.

-- Linas






Ben Goertzel

May 18, 2019, 5:45:11 PM
to opencog
***

The other meta-question is "who is managing this process?" -- it feels
like progress is at a dead stand-still; no one is communicating on any
communications channels visible to me.
***

This is because you are not on SingularityNET's internal Slack, where a
lot of the current discussion is taking place.

If you'd like to join this Slack -- which is private not public at
this point -- let me know and we can re-send you the email for you to
set up your li...@singularitynet.io email address, which you need in
order to access that Slack...

-- Ben

Linas Vepstas

May 18, 2019, 6:01:27 PM
to opencog
Ugh.

With great trepidation.

1) It's private, as you point out, not public.

2) With Slack, even if it were public, it is impossible to make any kind of record of what was said, collect it, publish it.  In particular, that means that you cannot use google/yahoo/bing/duckduckgo to search for those old slack conversations. Which sharply diminishes their value as a communications medium.  It's like having no long-term memory -- you can't look up or find what was said, what was decided, who decided it, what the outcome was. There is no public record; it's all dark, cigar-smoke-filled back rooms with no accountability, no records.

3) I find chat, as a style of communication, terribly inefficient and time-wasting. It requires a lot of time, a lot of effort, a lot of weeding through random junk.  I don't have the patience for that. I don't have the attention-span for that.  I want it here, now, instant, on the spot.  Chat is too slow, not dynamic enough. It is ... mind-poundingly boring.

So overall, I'd like to avoid slack. I prefer something faster, quicker, sharper.

-- linas



Vitaly Bogdanov

May 22, 2019, 12:19:38 PM
to opencog
Hi Linas,

It was merged half-a-year-ago, a year-ago.

In the current atomspace/opencog code I can only find OctoMapValue, which inherits from FloatValue (and keeps a vector of doubles in it) and has an update() method which updates the value using OctoMapNode. So the value should be updated before using it.

Yes, but I think you misunderstand how it works. It is updated only when the value is asked for. If no one asks for the value, it is never updated.  This design allows the current value to be managed externally, in some other component, and it is then brought into the atomspace "on demand", only when some user of the atomspace tries to look at the value.   Think of it as a "smart value" -- when the user asks the value "what number are you holding?", the value isn't holding anything; instead, it goes to the external server, gets the latest, and returns that.

In this case, OctoMapValue never changes, until you ask for its current value; then it goes to the space-server, gets the current value, and returns that.

OK, this way it should work. But the thing which confuses me is that the only place where update() is called is within the OctoMapValue::to_string() function.
 
The examples directory contains a much simpler example: "RandomValue" -- which uses `random()` as the "external system".  You're supposed to cut-n-paste that code, replace `random()` by your server, and that's it -- done.  That's all that  OctoMapValue is - just a small, simple wrapper.

Also - right now, OctoMapValue only holds a 3D position, I think. We need an API for size, width, height, orientation.  Maybe not all of that, right away, but at least a basic-size API.  So that natural language expressions like "See that small thing on the table? Point your finger at it" work.

Having these other attributes does not change the overall design. The Values don't update until they are accessed.

Yes, it is clear.

For example, if a generic predicate uses a value to verify whether an object is "red", then what it expects from the value is a pair of doubles (mean, confidence). If our value returns such a pair of doubles, it forgets the whole computation graph which led to this result. It means that we are not able to propagate an error back through the calculation graph and improve the next result.

? I don't understand. What do you need to propagate back?

Our goal is to make a system which consists of neural networks and atomspace links, and which could be trained by error backpropagation. When the system answers a question and the answer is not correct, the error is backpropagated through the computation graph, updating the truth values of the atomspace links and the neural-network weights.
 
As far as I see, such unification is not possible without changing the predicate logic, or introducing an outer system to calculate predicates and keep the computation graph.

Yes, it's fine to *also* have externally-calculated predicates, and that is what you have, more-or-less.  Having generic predicates that work with the space server would also be nice to have (and then, to have the space server hooked up to ROS, which it isn't, right now) 

Agreed; as I wrote before, these approaches do not exclude each other and could be combined.
 
Oh, and finally, of course, to hook up the predicates (generic, or external) to language (ghost). For this last step, we need the ghost maintainers -- Amen, Man Hin, others -- to look at, think about, and comment on what predicates they could actually use and support easily and quickly.  It's easy to make a list of prepositional and comparative phrases -- there are about 100 of them (above, below, behind, next-to, bigger-than, taller, etc.) and we should start with some reasonable subset of these, hook them into the chatbot, and get things like question-answering going.  As far as I know, no one has begun any of this work yet, right?

Not sure that question is for me, but in our work we didn't modify ghost. We used external "language to atomspace query" conversion logic, based on relex output, to demonstrate the approach.

Vitaly Bogdanov

May 22, 2019, 4:11:47 PM
to opencog
Hi Ben,


It seems analogous to what is done in Haskell to support backtracking
and Prolog-type cut and so forth.  There one does use an "external"
system to manage historical records, but this external system is
hidden behind the monadic interface,

http://okmij.org/ftp/papers/LogicT.pdf

http://hackage.haskell.org/package/logict

This is all using continuations behind the scenes, and related to some
discussions Linas and I had earlier about making Atomspace schema
processing support continuations.   If one is manipulating
continuations rather than just functions, then maintaining and
manipulating the external computation graph is "just" an efficient
shorthand for computing what the continuations denote...

Yes, if all the calculations were in Scheme then it would be possible, but some value-manipulation logic is implemented in bare C++, and this logic is spread over the atomspace code, so we cannot capture it easily. That is why we need to pass the computation graph from GroundedPredicateNodes through custom Atom wrappers.
 

Linas Vepstas

May 23, 2019, 3:29:55 AM
to opencog

Hi Vitaly,

On Wed, May 22, 2019 at 11:19 AM Vitaly Bogdanov <vsb...@gmail.com> wrote:
Hi Linas,

It was merged half-a-year-ago, a year-ago.

In the current atomspace/opencog code I can only find OctoMapValue, which inherits from FloatValue (and keeps a vector of doubles in it) and has an update() method which updates the value using OctoMapNode. So the value should be updated before using it.

Yes, but I think you misunderstand how it works. It is updated only when the value is asked for. If no one asks for the value, it is never updated.  This design allows the current value to be managed externally, in some other component, and it is then brought into the atomspace "on demand", only when some user of the atomspace tries to look at the value.   Think of it as a "smart value" -- when the user asks the value "what number are you holding?", the value isn't holding anything; instead, it goes to the external server, gets the latest, and returns that.

In this case, OctoMapValue never changes, until you ask for its current value; then it goes to the space-server, gets the current value, and returns that.

OK, this way it should work. But the thing which confuses me is that the only place where update() is called is within the OctoMapValue::to_string() function.

That's because it inherits from FloatValue ... The primary access is provided by the `value()` method -- see FloatValue.h line 59
```
     const std::vector<double>& value() const { update(); return _value; }
```
which works with
```
      virtual void update();
```
The goal here was to allow the compiler to inline the call to `value()`  whereas the `virtual void update()` is unconstrained - it doesn't have to return anything, and it can do anything at all. 


For example, if a generic predicate uses a value to verify whether an object is "red", then what it expects from the value is a pair of doubles (mean, confidence). If our value returns such a pair of doubles, it forgets the whole computation graph which led to this result. It means that we are not able to propagate an error back through the calculation graph and improve the next result.

? I don't understand. What do you need to propagate back?

Our goal is to make a system which consists of neural networks and atomspace links, and which could be trained by error backpropagation. When the system answers a question and the answer is not correct, the error is backpropagated through the computation graph, updating the truth values of the atomspace links and the neural-network weights.

OK. Yes, that seems reasonable.  Note that all TruthValues inherit from FloatValue. None of them overload the virtual `update()` method (they don't need to); but a special one could: e.g. you could create a class TensorFlowTruthValue and overload the `update()` method to compute something on the fly, or fetch the latest values, or whatever.

Mostly, you don't want to update atomspace truth values 100 times a second, because no one is going to be looking at them very much; such updates are wasted.  You want to avoid doing CPU-intensive work in the main atomspace thread; fork a new thread if you have cpu-intensive things. Watch out for lock contention, etc.  The Value system, with the `virtual update()` method, was designed to allow cpu-intensive things to live outside of the atomspace.

Oh, and finally, of course, to hook up the predicates (generic, or external) to language (ghost). For this last step, we need the ghost maintainers -- Amen, Man Hin, others -- to look at, think about, and comment on what predicates they could actually use and support easily and quickly.  It's easy to make a list of prepositional and comparative phrases -- there are about 100 of them (above, below, behind, next-to, bigger-than, taller, etc.) and we should start with some reasonable subset of these, hook them into the chatbot, and get things like question-answering going.  As far as I know, no one has begun any of this work yet, right?

> Not sure that question is for me, but in our work we didn't modify ghost. We used external "language to atomspace query" conversion logic, based on relex output, to demonstrate the approach.

For a demo, I guess that's OK. So, Ghost is a chatbot system designed for Hanson Robotics, and its primary goal is to integrate sensory and motor subsystems with language.  That is why I keep saying "preposition" over and over again.  I really really really want to have this:

EvaluationLink
     PredicateNode "is-next-to"
     ListLink
          ConceptNode   "Ben"
          ConceptNode   "David"

so that whenever those two are on stage with Sophia, and someone asks "Sophia, do you see Ben standing next to David?", she can honestly say yes, because the EvaluationLink caused the `FloatValue::update()` method to be called, and updated with the latest from Tensorflow or from ROS+OctoMap or wherever.
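The ghost-side call would then be something like this -- assuming "is-next-to" has been wired up to real code with the DefineLink trick from earlier in this thread (I haven't tested this exact indirection):

```
(use-modules (opencog) (opencog exec))

; Evaluating the link resolves the DefinedPredicate to whatever it is
; currently defined as; that code fetches the latest Values and returns
; a TruthValue.
(cog-evaluate!
   (Evaluation
      (DefinedPredicate "is-next-to")
      (List (Concept "Ben") (Concept "David"))))
```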

Now, you can certainly hack around with relex, but that's hard and ugly.  Of course, ghost is just a chatbot; it's not a sophisticated language-processing system.  But it works, and until we get a replacement, I would prefer it if efforts were aimed that way.

Of course, if you want to invent a brand-new language-processing system, well, yes, that would be nice, but that is a different discussion.

Linas Vepstas

May 23, 2019, 4:09:11 AM
to opencog
This sounds like that old conversation coming back to life. Stop using GroundedPredicateNodes. Start using Values. That is what they are there for.  That is what they were created for.  FloatValue was created last summer, specifically for the neural-net/tensorflow project.  Sorry Vitaly -- this happened before you got started with opencog, or just in the first few weeks after you joined -- there were long discussions on this mailing list where it was clear that GroundedPredicateNode would not work, that it provided the wrong interface into tensorflow. If you look at the git commit history, that is when the `update()` method first appeared on FloatValue -- it was designed to solve exactly this problem.  (I looked just now. It's pull req #1709 -- I diligently avoided saying "tensorflow" anywhere, calling it "streaming" instead. It's from exactly a year ago.)

-- Linas

Linas Vepstas

May 23, 2019, 4:25:04 AM
to opencog

This sounds like that old conversation coming back to life. Stop using GroundedPredicateNodes. Start using Values. That is what they are there for.  That is what they were created for.     FloatValue was created last summer, specifically for the neural-net/tensorflow project. 

I mean, at the time, I was pretty happy with it; I thought it was really pretty slick.  After that pull req, you could do things like ((3 * vector-neuron A + 42 * vector-neuron B) <= vector-neuron C) all in atomese, all with vectors, and it would just call the `update()` method automatically to get the latest vectors from your tensorflow network.  The wiki page here gives examples: 


Like I say, I thought it was pretty slick; I'm disappointed you didn't use that API.

Linas Vepstas

May 24, 2019, 12:12:19 AM
to opencog, Joel Peter William Pitt
Hi Joel;

I haven't heard back from you.

Some of what I said in this email chain was misleading. So, to fix that, I reviewed and re-wrote the wiki page https://wiki.opencog.org/w/SpaceServer -- it should now accurately capture the actual status of things.  If there's anything unclear there, or that seems undoable, or is missing details, let me know.

That page has some pseudo-code on it -- there's still a whole lot of design and engineering and coding that is needed to make it work.  Parts are done-ish, but other parts, where things get confusing, well -- I'm here to help.

Vitaly,

That wiki page now says "oh, by the way, we can slot a neural net into this", without providing any details. I think this should be fixed. So -- do you have any docs or pdfs or wiki pages describing your current system?  Would you care to think about, discuss, or redesign your system to more closely resemble what I wrote there, or explain why it might not work as written, or what some alternatives might be?  I really want to have a common, unified language API that makes it easy for different language subsystems to use your code, and also makes it possible for different perception systems to be used. The https://wiki.opencog.org/w/SpaceServer wiki page is my current best guess for what that might be like.

-- Linas


Joel Pitt

May 26, 2019, 7:26:43 PM
to Linas Vepstas, opencog
Hi Linas,

Thanks for updating the wiki page. I'll have a read.

As I feel sufficiently rusty on OpenCog, I've just been lurking in this discussion. I should clarify that my question about the SpaceServer was for a particular SingularityNET project, and it seemed to me that having a functional Space/Time Server would be a useful thing for making that project successful.

Even being part of Slack, part of the challenge is trying to get people to communicate about the related efforts. As I understand it, Misgana is still working on it and has something functional. I'm not sure how close that version is to your conceptual design, but I want to avoid writing YASTS (Yet Another Space Time Server) or duplicating work already done.

Once I'm more sure about my priorities I'll have a better idea if I can dedicate time to building/improving the SpaceTime server as you see it.

Joel


Linas Vepstas

May 26, 2019, 9:21:54 PM
to Joel Peter William Pitt, Misgana Bayetta, opencog
Hi Joel, Hi Misgana

Thanks for answering.

On Sun, May 26, 2019 at 6:26 PM Joel Pitt <joel...@gmail.com> wrote:
Hi Linas,

Thanks for updating the wiki page. I'll have a read.

As I feel sufficiently rusty on OpenCog, I've just been lurking in this discussion. I should clarify that my question about the SpaceServer was for a particular SingularityNET project, and it seemed to me that having a functional Space/Time Server would be a useful thing for making that project successful.

Well, in that context, the first question would be "what do you want to do with it?" because that drives all of the design.  If you just want to track positions of things, I believe that ROS has some kick-butt solutions for that: I mean -- people are building self-driving, self-flying you-name-its with ROS and you are not going to beat the sensory infrastructure that has been built up there.

My primary interest is to hook up perception to language and spatial reasoning. For that, one has to be able to access spatial data on the same platform that is doing language -- so, in my case, the AtomSpace.  For language and common-sense reasoning, that means prepositions: is it above? below? bigger? smaller? etc. so I am pushing for that.  (I really really want to be able to say "here, this is opencog actually doing something useful", and talking about perception would be that.)

But if you want to do -- I dunno -- spatial reasoning, well, wow, that would be a completely different thing; that would be pure AI research, as far as I'm concerned ... but also more-or-less unrelated to opencog. I'm not sure what advantage you'd get from using the atomspace, or other parts of opencog, as opposed to starting with a blank slate.

Even being part of Slack,

I keep asking for Matrix. At least it's open source. At least one can get access.  https://matrix.org/blog/index

part of the challenge is trying to get people to communicate about the related efforts.

Opencog mailing list, IRC and slack have been a ghost-town for years. Well, forever. I don't know why.  I don't have the personality to go around and poll everyone about what they are doing.

As I understand it, Misgana is still working on it and has something functional.

!?  Misgana, are you working on something? What? Where?  I rejected your (Misgana's) last pull request, because it was a re-invention of the code you had written earlier ... and your earlier code was already merged (and actually, it worked better).

I'm not sure how close that version is to your conceptual design, but I want to avoid writing YASTS (Yet Another Space Time Server) or duplicating work already done.

I think the wiki page spells out everything that has and has not been done.

Once I'm more sure about my priorities I'll have a better idea if I can dedicate time to building/improving the SpaceTime server as you see it.

Well, that would be cool.  I think that what I wrote in the wiki page consists of bite-size, achievable steps that would be a good warmup for getting back into opencog.  Where "bite-size" is a relative concept.

-- Linas

Joel Pitt

May 26, 2019, 10:35:02 PM
to opencog, Misgana Bayetta
On Mon, 27 May 2019 at 13:21, Linas Vepstas <linasv...@gmail.com> wrote:
Well, in that context, the first question would be "what do you want to do with it?" because that drives all of the design.  If you just want to track positions of things, I believe that ROS has some kick-butt solutions for that: I mean -- people are building self-driving, self-flying you-name-its with ROS and you are not going to beat the sensory infrastructure that has been built up there.

My primary interest is to hook up perception to language and spatial reasoning. For that, one has to be able to access spatial data on the same platform that is doing language -- so, in my case, the AtomSpace.  For language and common-sense reasoning, that means prepositions: is it above? below? bigger? smaller? etc. so I am pushing for that.  (I really really want to be able to say "here, this is opencog actually doing something useful", and talking about perception would be that.)
 
The project is also language-related and already uses OpenCog. Basically a chat-bot.
 
As I understand it, Misgana is still working on it and has something functional.

!?  Misgana, are you working on something? What? Where?  I rejected your (Misgana's) last pull request, because it was a re-invention of the code you had written earlier ... and your earlier code was already merged (and actually, it worked better).

Sorry... I meant Misgana is "working on it" in that it was their responsibility recently, before being pulled onto another project. And since there is unmerged code (probably due to the rejected PR), I presumed it was a work in progress that would be resumed at some stage.