Hi all, I may be jumping into some OpenCog development, and I'm keen to know the current status of the SpaceServer. I know the code in the opencog repo is deprecated.
I also see there is a semi-recent branch from Misgana here: https://github.com/misgeatgit/opencog/tree/time-ocmap
Any other pointers to people I should contact, or links I should be aware of?
They implemented something completely different and totally incompatible. Actually, I'm not even sure any more about what it is they built. It seems like a tragic mistake, because the net result is a system that is incompatible with... everything else. I really really want to get back to the core idea of using PredicateNodes for everything, and hiding all the neural-net magic under the PredicateNodes, and not somewhere else (certainly not in python/C++/scheme code APIs). How to rescue that effort, and get it to work with ghost, well, that is a different conversation. If we could just have the basic PredicateNode API working -- this would be future-proof and extendable, and I think it's just not that hard to do. So yes, please please do it!
> I really really want to get back to the core idea of using PredicateNodes for everything, and hiding all the neural-net magic under the PredicateNodes [...]

Some explanation to clarify the difference.
In the first case, the system constantly updates the coordinates of the objects. It also constantly analyses the scene and keeps track of the fact that some object, say a "cube", is here and that it has the color "red".
When the system needs to answer the question "What is on the left of the red cube?", it queries the AtomSpace and evaluates a predicate that finds the "red cube" and computes the "left-of" predicate using the coordinates of the cube and of the other objects in the scene.
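A minimal sketch of this first approach in Scheme, under some illustrative assumptions: the object names, the "has-color" predicate, and the "*-position-*" key are all made up for the example, and left-of? is just a comparison of x-coordinates.

```scheme
(use-modules (opencog) (opencog exec))

;; The perception loop constantly writes scene facts into the AtomSpace...
(Inheritance (Concept "cube-1") (Concept "cube"))
(Evaluation (Predicate "has-color")
            (List (Concept "cube-1") (Concept "red")))

;; ...and constantly refreshes the coordinates under a well-known key.
(cog-set-value! (Concept "cube-1") (Predicate "*-position-*")
                (FloatValue 1.0 0.5 0.0))
(cog-set-value! (Concept "ball-7") (Predicate "*-position-*")
                (FloatValue 2.5 0.5 0.0))

;; "left-of" is then just arithmetic over the stored coordinates.
(define (left-of? a b)
  (let ((pa (cog-value->list (cog-value a (Predicate "*-position-*"))))
        (pb (cog-value->list (cog-value b (Predicate "*-position-*")))))
    (if (< (car pa) (car pb)) (stv 1 1) (stv 0 1))))

;; "Is cube-1 to the left of ball-7?"
(cog-evaluate!
  (Evaluation (GroundedPredicate "scm: left-of?")
              (List (Concept "cube-1") (Concept "ball-7"))))
```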
In the second case, the system does not need to constantly add all the attributes of all the objects in the scene to the AtomSpace in order to be ready to process queries.
Instead, it uses a GroundedSchemaNode or GroundedPredicateNode to calculate the predicates "is-red" and "left-of" on the fly, using the visual features of the objects it sees. The error is then backpropagated through the GroundedSchemaNode and GroundedPredicateNode to improve the predicate's accuracy.
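A matching sketch of the second approach, again with made-up names: here nn-is-red? stands in for a call into the vision network, so nothing about the object's color is stored in the AtomSpace ahead of time.

```scheme
(use-modules (opencog) (opencog exec))

;; Hypothetical bridge into the neural net: compute "is-red" on the fly
;; from the object's visual features, instead of reading a stored fact.
(define (nn-is-red? obj)
  ;; ...call the vision network here, convert its output to a truth value...
  (stv 0.9 0.8))

;; "Is cube-1 red?" -- evaluated on demand, at question-answering time.
(cog-evaluate!
  (Evaluation (GroundedPredicate "scm: nn-is-red?")
              (List (Concept "cube-1"))))
```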
The two ways of processing requests do not exclude each other. The main difference is that in the second case you do not need to pre-compute all the attributes you might need to answer the question.
The intended design is that there is a well-known Value that is attached to red-cube. That Value knows how to obtain the correct 3D location from the SpaceServer (or some other server, e.g. your server). That Value ONLY fetches the location when it is asked for it, and ONLY THEN. This is the code that Misgana wrote. Twice. You should NEVER directly shove 3D positions into the atomspace!!
That way, we can have a common, uniform API for all visual-processing and 3D spatial-reasoning systems. (For example, some 3D spatial reasoning might be done on imaginary, pretend objects. The sizes, colors, and locations of those objects will not come from your vision server; they will come in through some other subsystem. However, the API will be the same.) I should be able to tell the robot "Imagine that there is a six-foot red cube directly in front of Joel. Would Joel still be visible?" and get the right answer to that. When the robot imagines this, it would not use the space-server, or your server, or the ROS-SLAM code for that imagination. However, since the access style/access API is effectively the same, it can still perform that spatial reasoning and get correct answers.
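A sketch of what the access pattern looks like under this design (the "*-location-*" key name is just an assumption for illustration): the Value stored under that key is a "smart" one, along the lines of OctoMapValue, and the external server is consulted only at the moment somebody reads it.

```scheme
(use-modules (opencog) (opencog exec))

;; Nothing below writes coordinates into the AtomSpace. The Value sitting
;; under the "*-location-*" key is assumed to fetch the current position
;; from the SpaceServer (or your server) when, and only when, it is read.

;; Read the current location directly...
(cog-value (Concept "red-cube") (Predicate "*-location-*"))

;; ...or from within Atomese, e.g. inside some generic spatial predicate.
(cog-execute!
  (ValueOf (Concept "red-cube") (Predicate "*-location-*")))
```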
Hi Linas,

> That Value ONLY fetches the location when it is asked for it, and ONLY THEN. This is the code that Misgana wrote. Twice. You should NEVER directly shove 3D positions into the atomspace!!

Am I right that the implementation of this design is still not merged?
In the current atomspace/opencog code I can only find OctoMapValue, which inherits from FloatValue (and keeps a vector of doubles in it) and has an update() method that updates the value using an OctoMapNode. So the value has to be updated before it is used.
> That way, we can have a common, uniform API for all visual-processing and 3D spatial-reasoning systems. [...] since the access style/access API is effectively the same, it can still perform that spatial reasoning and get correct answers.

Yes, I think I see your point. We could wrap the NN in a Value and return its results on demand, to be analyzed by generic predicates. Such an approach allows getting results from visual features on demand. But backpropagation does not seem to work with this approach.
For example, if a generic predicate uses a Value to verify whether an object is "red", then what it expects from the Value is a pair of doubles (mean, confidence). If our Value returns such a pair of doubles, the whole computation graph that led to this result is forgotten. That means we are not able to propagate an error back through the computation graph and improve the next result.
As far as I can see, such unification is not possible without changing the predicate logic or introducing an external system that calculates the predicates and keeps the computation graph.
It was merged half a year ago, a year ago.

> In the current atomspace/opencog code I can only find OctoMapValue, which inherits from FloatValue [...] So the value has to be updated before it is used.

Yes, but I think you misunderstand how it works. It is updated only when the value is asked for. If no one asks for the value, it is never updated. This design allows the current value to be managed externally, in some other component, and it is then brought into the atomspace "on demand", only when some user of the atomspace tries to look at the value. Think of it as a "smart value" -- when the user asks the value "what number are you holding?", the value isn't holding anything; instead, it goes to the external server, gets the latest, and returns that. In this case, OctoMapValue never changes until you ask for its current value; then it goes to the space-server, gets the current value, and returns that.
The examples directory contains a much simpler example: "RandomValue" -- which uses `random()` as the "external system". You're supposed to cut-n-paste that code, replace `random()` with your server, and that's it -- done. That's all that OctoMapValue is -- just a small, simple wrapper.
Also -- right now, OctoMapValue only holds a 3D position, I think. We need an API for size, width, height, orientation. Maybe not all of that right away, but at least a basic-size API, so that natural-language expressions like "See that small thing on the table? Point your finger at it" work. Having these other attributes does not change the overall design. The Values don't update until they are accessed.
> If our Value returns such a pair of doubles, the whole computation graph that led to this result is forgotten. That means we are not able to propagate an error back through the computation graph and improve the next result.

? I don't understand. What do you need to propagate back?
> As far as I can see, such unification is not possible without changing the predicate logic or introducing an external system that calculates the predicates and keeps the computation graph.
Yes, it's fine to *also* have externally-calculated predicates, and that is what you have, more or less. Having generic predicates that work with the space server would also be nice to have (and then, to have the space server hooked up to ROS, which it isn't, right now).
Oh, and finally, of course, to hook up the predicates (generic or external) to language (ghost). For this last step, we need the ghost maintainers -- Amen, Man Hin, others -- to look at, think about, and comment on which predicates they could actually use and support easily and quickly. It's easy to make a list of prepositional and comparative phrases -- there are about 100 of them (above, below, behind, next-to, bigger-than, taller, ... etc.) -- and we should start with some reasonable subset of these, hook them into the chatbot, and get things like question-answering going. As far as I know, no one has begun any of this work yet, right?
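As a sketch of what one predicate in such a subset might look like (again with the illustrative "*-location-*" key, and assuming coordinates are fetched on demand as described above), each phrase becomes a named predicate that ghost, or any other subsystem, can evaluate without ever seeing the coordinates:

```scheme
(use-modules (opencog) (opencog exec))

;; "above" compares the z-coordinates fetched on demand for each object.
(define (above? a b)
  (let ((pa (cog-value->list (cog-value a (Predicate "*-location-*"))))
        (pb (cog-value->list (cog-value b (Predicate "*-location-*")))))
    (if (> (caddr pa) (caddr pb)) (stv 1 1) (stv 0 1))))

;; Give it a well-known name, so the chatbot only ever sees the predicate.
(DefineLink
  (DefinedPredicate "above")
  (Lambda (VariableList (Variable "$a") (Variable "$b"))
    (Evaluation (GroundedPredicate "scm: above?")
                (List (Variable "$a") (Variable "$b")))))

;; "Is the lamp above the table?"
(cog-evaluate!
  (Evaluation (DefinedPredicate "above")
              (List (Concept "lamp") (Concept "table"))))
```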
It seems analogous to what is done in Haskell to support backtracking and Prolog-type cut and so forth. There one does use an "external" system to manage historical records, but this external system is hidden behind the monadic interface:

http://okmij.org/ftp/papers/LogicT.pdf
http://hackage.haskell.org/package/logict

This is all using continuations behind the scenes, and related to some discussions Linas and I had earlier about making Atomspace schema processing support continuations. If one is manipulating continuations rather than just functions, then maintaining and manipulating the external computation graph is "just" an efficient shorthand for computing what the continuations denote...
Hi Linas,

> It is updated only when the value is asked for. If no one asks for the value, it is never updated. [...] In this case, OctoMapValue never changes until you ask for its current value; then it goes to the space-server, gets the current value, and returns that.

OK, this way it should work. But the thing that confuses me is that the only place where update() is called is within the OctoMapValue.to_string() function.
> ? I don't understand. What do you need to propagate back?

Our goal is to make a system consisting of neural networks and AtomSpace links that can be trained by error backpropagation. When the system answers a question and the answer is not correct, the error is backpropagated through the computation graph and updates the truth values of the AtomSpace links and the neural-network weights.
This sounds like that old conversation coming back to life. Stop using GroundedPredicateNodes. Start using Values. That is what they are there for. That is what they were created for. FloatValue was created last summer, specifically for the neural-net/tensorflow project.
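A tiny sketch of the contrast being drawn, with an illustrative "*-redness-*" key: rather than a GroundedPredicateNode calling out into python/C++ code, the net (or its wrapper) publishes its current output as a FloatValue under a well-known key, and anything in the AtomSpace just reads it.

```scheme
(use-modules (opencog))

;; The NN side publishes its latest estimate (mean, confidence)...
(cog-set-value! (Concept "cube-1") (Predicate "*-redness-*")
                (FloatValue 0.93 0.8))

;; ...and AtomSpace-side code reads it, with no grounded-code call involved.
(cog-value (Concept "cube-1") (Predicate "*-redness-*"))
```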
Hi Linas, thanks for updating the wiki page. I'll have a read.
As I feel sufficiently rusty on OpenCog, I've just been lurking in this discussion. I should clarify that my question about the SpaceServer was for a particular SingularityNET project, and it seemed to me that having a functional Space/Time Server would be a logically useful thing for that project to succeed.
Even being part of Slack, part of the challenge is trying to get people to communicate about the related efforts.
As I understand it, Misgana is still working on it and has something functional.
I'm not sure how close that version is to your conceptual design, but I want to avoid writing YASTS (Yet Another Space Time Server) or duplicating work already done.
Once I'm more sure about my priorities, I'll have a better idea of whether I can dedicate time to building/improving the SpaceTime server as you see it.
Well, in that context, the first question would be "what do you want to do with it?", because that drives all of the design. If you just want to track positions of things, I believe that ROS has some kick-butt solutions for that: I mean -- people are building self-driving, self-flying you-name-its with ROS, and you are not going to beat the sensory infrastructure that has been built up there.

My primary interest is to hook up perception to language and spatial reasoning. For that, one has to be able to access spatial data on the same platform that is doing language -- so, in my case, the AtomSpace. For language and common-sense reasoning, that means prepositions: is it above? below? bigger? smaller? etc., so I am pushing for that. (I really really want to be able to say "here, this is opencog actually doing something useful", and talking about perception would be that.)
> As I understand it, Misgana is still working on it and has something functional.

!? Misgana, are you working on something? What? Where? I rejected your (Misgana's) last pull request, because it was a re-invention of the code you had written earlier... and your earlier code was already merged (and actually, it worked better).