--
You received this message because you are subscribed to the Google Groups "opencog" group.
To unsubscribe from this group and stop receiving emails from it, send an email to opencog+u...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/opencog/8dd1cfa3-0e8a-4d85-ba2a-6b38fcd2aea7n%40googlegroups.com.
> There is little/no actual "AGI" in that code base.

Regarding AGI, I am aware of the distance that separates us from the goal. But in my simplistic view (don't kill me for this phrase), human intelligence is just excellent inference over an excellent knowledge base maintained by excellent learning. Excellence is a matter of progress.
This is to say that I think you are closer to AGI than it looks.
In reply to Nil (on Slack I'm named Raschild):
I had seen the beginning of the work and it is very interesting. In the next few days I will look at the current state.

Two quick questions:
1) How complicated is it to work directly with ROS + Gazebo, compared to Malmo and Gym?
2) Are Values already usable in place of OctoMap and the SpaceTime server?
I started with the reasoning side: I am currently learning the inference rules and how they work with the AtomSpace. I have seen some of the examples in ure and pln, and I was trying to understand the blocksworld problem developed by Anatoly Belikov here:
* Ideally my goal was to extend the "model of the world" to work more with objects than people and to extend the "self-model" to execute navigation and manipulation plans. In all of this, I haven't yet explored the learning.
Oh, please let me kill you! That's where all the fun is! Based on discussions with many people, there is a wide-spread misunderstanding of what AGI is or how it might be achieved. Although what you said is superficially, simplistically correct, I want to point out that "excellence" cannot be achieved by hand-crafting knowledge bases. Very few people seem to understand this, and seem to believe that somehow just slapping a bunch of parts together will result in AGI. That designing AGI is like designing an airplane, that it's just a matter of "excellent design" and it will fly by itself. This is not the case.

Thus, I was trying to be careful in distinguishing the "scaffolding", which is hand-crafted, from actual AGI-type work. The scaffolding is needed to bring data into a format where an AGI-type system can interface with it. At every point of design, you have to ask: is this piece of code just some more hand-crafted (human-crafted) special-case code that is being used to convert the external world into a form that a computer algorithm can interact with? Or is this piece of code "AGI" (or as close to AGI as we can get right now)? So I am trying to draw a contrast between "those things that are AGI" and "ancillary support services".
> I had seen the beginning of the work and it is very interesting. In the next few days I will look at the current state. Two quick questions:
> 1) How complicated is it to work directly with Ros + Gazebo compared to Malmo and Gym?

I have only used ROS. The design is straight-forward. If a ROS event comes in (some face is perceived; there is some loud noise, or some other environmental change), there is a python snippet (ROS is easiest to use with python) that converts that event into Atomese, and sends that Atomese to the cogserver (the cogserver is a network server, nothing more). So for example, a loud sound might be converted to `(StateLink (PredicateNode "ambient sound") (ConceptNode "loud sound"))`. Then, on the opencog side, processing does whatever you've set it up to do with this kind of information. Exactly how sophisticated you want to be is up to you.

For output, it's even easier: `(cog-evaluate! (EvaluationLink (GroundedPredicateNode "py:twiddle_ROS_message") (ListLink ... arguments ...)))` calls a python function "twiddle_ROS_message" to send some data somewhere in ROS.

My remarks about "excellent design" and "AGI" above mean that the python wrappers for converting ROS data to Atomese should be minimal; they should do just enough to bring external information into the AtomSpace. You want to avoid a game of writing large, complex python scripts. So when you ask "How complicated is it to work directly with Ros + Gazebo compared to Malmo and Gym?", the answer should be "about the same" and "not complicated", because there should be only minimalistic shims to convert to/from Atomese and the message formats these other systems use. If you are creating something complicated in these systems, you are not doing AGI, you are doing robotics.
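The shim described above can be sketched in a few lines of Python. This is only an illustration under stated assumptions, not the actual Eva code: the event names are made up, the ROS callback registration is omitted, and the cogserver transport is stubbed out with a print.

```python
# Minimal sketch of a ROS -> Atomese shim, per the description above.
# In a real setup, on_ros_event would be registered as a rospy callback
# and the s-expression shipped to the cogserver over a network socket;
# both are stubbed here so the shape of the shim stays visible.

def event_to_atomese(predicate, concept):
    """Convert a perceived event into an Atomese StateLink s-expression."""
    return ('(StateLink (PredicateNode "{}") (ConceptNode "{}"))'
            .format(predicate, concept))

def send_to_cogserver(sexpr):
    # Stub: a real shim would write this string to the cogserver socket.
    print("-> cogserver:", sexpr)

def on_ros_event(event):
    """Hypothetical ROS callback: translate the event and ship it out."""
    sexpr = event_to_atomese(event["predicate"], event["value"])
    send_to_cogserver(sexpr)
    return sexpr

# Example: a loud noise is perceived.
on_ros_event({"predicate": "ambient sound", "value": "loud sound"})
```

The point of keeping the shim this thin is exactly the one made above: all the interesting processing happens on the opencog side, not in the python glue.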
Again: scaffolding vs AGI. So, 3D location is part of the external world, and the scaffolding must interface to the external world, and take 3D data and convert it into a format that the AGI code can operate on. If you have AGI code that can work directly with 3D point clouds, then great! No scaffolding is needed! If you (like me) have proto-AGI code that wants to work with symbolic natural language, then some scaffolding is needed to convert point-clouds into prepositions. Some day in the future, maybe we can remove some of the scaffolding.

However, up until now, almost all work that has been done, that is being done, is on scaffolding. If you are not careful, you will find yourself doing the same. This is not bad: it's educational, and it's important, and it helps show where the boundary is between the scaffolding and the AGI. If nothing else, this is called "learning at the school of hard knocks": "I built one and it didn't work, but I learned something". At the forefront of knowledge, that's the only school that is open. That's what science is.
Reasoning and inference is a very dangerous place to start, and may kill your project before it even gets started. There are several reasons for this.
* Reasoning presumes that you have already decided on a representation for your data (either hand-crafted, or automatically learned, somehow). Once you have this representation, then you can reason on it. But do you have this representation? No, you don't. You might borrow one from blocks-world, or the one from Eva, or the one from rocca (or the one from agi-bio, which represents DNA, RNA and proteins). You then have the problem of pulling external data and placing it into your representation, where "external data" is vision, sound, text, or RNA/DNA genetic sequences. This is scaffolding.
* Reasoning presumes that you have inference rules. Where did these come from? Did you hand-craft them? PLN has a bunch of inference rules that Ben and friends hand-crafted 10-15 years ago, and that Nil has carefully implemented in C code. They work, kind-of, whenever you have a hand-crafted representation for your data that is PLN-compatible. Nil spent a lot of time, a huge amount of time (the last 10 years), getting the hand-crafted rules to fit with the hand-crafted representation, and getting reasoning working efficiently and quickly. But if your representation does not fit the PLN structure, then it won't work. (None of my language work was ever able to fit with PLN. My new AGI work (at opencog/learn) will almost surely not fit with PLN; the goal there is to learn brand-new inference rules, instead of using the hand-crafted ones.)

* The actual implementation of the URE is "hard-core comp-sci", or maybe "good old-fashioned comp-sci": it's a set of algorithms to apply some rewrite rules to a network. There are many non-opencog systems that do something similar, such as SAT-solvers, constraint-satisfaction systems, ASP (answer-set programming), the "lambda cube", higher-order logic, and theorem-proving systems. It's hard-core; it's not easy. Many of these systems are much, much faster, and much more flexible, *if* your data representation is not PLN but something else: e.g. boolean expressions or prolog-like assertions. So we are back again to "what is your internal model?"

For example, in robotics, for a robot inside an office building, a common inference task is: "Is the door open? If the door is open, then roll through it; else grasp the door handle and open the door." The standard grad-school robotics approach is to use ROS or something similar to "see" the door, and then to use ASP (answer-set programming) to perform very fast crisp-logic reasoning and inference. It works. It's what 90% of all university robotics departments use.
It is reasoning and inference. It's not AGI.
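The door task above reduces to a toy crisp-logic program. A real robotics stack would hand this to an ASP solver (clingo, for example); the plain-Python sketch below, with invented rule and fact names, just shows the flavor of the crisp inference being described:

```python
# Toy crisp-logic inference for the office-door task described above.
# A few if-then rules over a set of perceived facts stand in for what
# an ASP solver would compute; the fact names are illustrative only.

def choose_action(facts):
    """Pick an action from crisp facts about the world."""
    if "door_open" in facts:
        return "roll_through_door"
    if "door_closed" in facts and "has_gripper" in facts:
        return "grasp_handle_and_open"
    return "wait"  # not enough knowledge to act

print(choose_action({"door_open"}))                   # roll_through_door
print(choose_action({"door_closed", "has_gripper"}))  # grasp_handle_and_open
```

Note how everything here is hand-crafted: the facts, the rules, the actions. That is exactly the sense in which this is "reasoning and inference, but not AGI".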
> * Ideally my goal was to extend the "model of the world" to work more with objects than people and to extend the "self-model" to execute navigation and manipulation plans. In all of this, I haven't yet explored the learning.

For Eva, the self-model and world-model are all part of the same thing, and they were hand-crafted (not learned). The goal was to interface language to movement and perception. The inspiration was to use concepts and ideas from Melcuk's "Meaning-Text Theory" (MTT) for the world-model.

Getting this to work involved a sequence of rickety and fragile transformations: from sound to text (via google voice-to-text), which is inaccurate. From text to a parse-tree (via link-grammar). From parse-tree to the internal model. From the internal model to robot motion/action. Changing anything anywhere was conceptually hard (no one else understood what the heck I was doing, including, among others, "the management" (Ben and David), and without management support, the going gets tough.) Also, it was abstract enough and complex enough that other programmers were unwilling to learn how it worked, and so were unwilling to help. If you personally want to work on this, then be aware that it is abstract and complex. And fragile. (Part of the goal of "good engineering" is to compartmentalize the complexity so that it becomes "easy to use" and non-fragile. This code base needed a little bit more "good engineering" than it ever got.)

My goal with the opencog/learn project is to automate all of the above, including the reasoning, inference, and world-model, but it is far away from that, so far. I think I know how to do these things, but now I have to ... do them.
-- Linas
Just to throw in my 3 cents here: I have done some work on moving ROCCA to use MineRL's Gym API to access Minecraft, instead of using a separate Malmo wrapper. It works, in the sense that I can run the example code that way, and it simplifies the whole design somewhat, as interfacing with Minecraft is reduced to relying on the familiar structure of a Gym API. I also have a setup where I can run the whole thing in a Docker container, so it's almost reproducible (there is a minor file edit needed).

An idea that I wrote about on Discord was to bring in some unsupervised image segmenter (like MONet), train it on the MineRL dataset (they have a dataset of Minecraft traces), and then use that to make some information about visual objects available to the agent. For now I am stuck a bit, though: sadly, the loading code they provided for their dataset has memory-leak issues, so I will have to write my own. It shouldn't prove too difficult, as I just need the video frames, but I have yet to get around to implementing it.
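For readers unfamiliar with it, the "familiar structure of a Gym API" mentioned above is just a reset/step loop. Below is a hedged sketch of that loop with a trivial stand-in environment (not MineRL itself, whose observation and action spaces are far richer; all names here are illustrative):

```python
# Sketch of the Gym-style agent loop that the agent ends up using once
# Minecraft sits behind a Gym API. The environment is a stand-in so the
# control flow is visible without Minecraft or MineRL installed.

class StubMinecraftEnv:
    """Illustrative stand-in for a MineRL/Gym environment."""
    def reset(self):
        self.steps = 0
        return {"pov": None}          # initial observation

    def step(self, action):
        self.steps += 1
        obs = {"pov": None}
        reward = 1.0 if action == "mine" else 0.0
        done = self.steps >= 3        # tiny 3-step episode
        return obs, reward, done, {}  # the standard Gym 4-tuple

def run_episode(env, policy):
    """The whole agent/environment interface: reset, then step until done."""
    obs, total, done = env.reset(), 0.0, False
    while not done:
        obs, reward, done, _ = env.step(policy(obs))
        total += reward
    return total

print(run_episode(StubMinecraftEnv(), lambda obs: "mine"))  # 3.0
```

The design win described above is that the agent only ever sees this interface, so swapping Malmo for MineRL (or anything else) does not touch the agent code.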
OK, lots of things here! I'm trying to learn more than 10 years of your work in a few months.
On Tuesday, March 23, 2021 at 18:10:24 UTC+1, linas wrote:
> Oh, please let me kill you! That's where all the fun is! [...] So I am trying to draw a contrast between "those things that are AGI" and "ancillary support services".

I'm starting to understand what you mean. Probably all the code I'm thinking of is "scaffolding", i.e. code that "brings data into a format where an AGI-type system can interface with it". Maybe it's not clear to me what AGI code is. I had seen /learn and /generate only from the READMEs, maybe because I found them hard at that time (and at this time, for sure).

I agree that hand-crafting a KB and "excellence" don't mix. But then, your proposal would be to achieve "excellence" with a knowledge base built by what?
> I have only used ROS. The design is straight-forward. [...] If you are creating something complicated in these systems, you are not doing AGI, you are doing robotics.

I saw the Python wrapper from ROS to Atomese in the Eva folder (I used ROS with C++ in a robotics course, and yes, Python is simpler), and it really is minimal.
But there is one thing I don't completely understand: to get a GroundedPredicateNode to execute a Python function, can I use its STI? Should it be automatic? How does it work?
> Again: scaffolding vs AGI. [...] At the forefront of knowledge, that's the only school that is open. That's what science is.

Ideally, is there an AGI code idea that works directly with 3D point clouds?
I also suppose that working with symbolic natural language, and thus with propositions, is more efficient! Point clouds are heavy and it takes a lot of work to extract information from them, so why would we want to work with them directly?
> Reasoning and inference is a very dangerous place to start, and may kill your project before it even gets started.

I'm feeling it!

> [...] It is reasoning and inference. It's not AGI.

I don't think the following is exactly a representation of the data, but... I thought I was starting with a trivial representation: objects are described by (ConceptNode "English-object-name"), and primitive robot actions by GroundedPredicateNodes which call Python functions that actually perform those actions via ROS. A vision algorithm recognizes certain objects and returns their English names and 3D coordinates. The robot receives goals to complete via English sentences, through Relex2Logic. Once the inference rules are written, the robot tries to solve the goals. When it doesn't know what to do, it tries things randomly, builds up its KB from the sensors, and continues to make inferences. The following is an example in a very rough pseudo-language.
That's what my mind thinks when planning the resolution of this problem.
It certainly has many wrong ideas, concepts, ways of doing and dealing with things...
What are the critical errors that I've made? What are the main differences from Eva?
Concepts: "name" - "3D pose"
- bottle - NA
- table - NA
(Predicate: "over" List ("bottle") ("table"))

Actions:
- Go random
- Go to coord
- Grab obj

Goal: (bottle in hand) // = grab bottle

Inference rules: all the necessary rules, i.e.
* grab-rule: preconditions: (robot-coord = obj-coord) ..., effects: (obj in hand) ...
* coord-rule: if x is in "coord1" and y is over x, then y is in "coord1"

-> So, the robot tries backward chaining to find the behavior tree to run. It doesn't find it: it lacks knowledge; it doesn't know where the bottle is (let's leave out partial trees).
-> Go random ...
-> Vision sensor recognizes table
-> atomspace update: table in coord (1,1,1)
-> forward chaining -> bottle in coord (1,1,1)
-> backward chaining finds a tree, that is: Go to coord (1,1,1) + Grab obj
-> goal achieved
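The pseudo-language scenario above can be simulated in a few lines of Python. This is a toy sketch: facts are plain triples, the URE's chaining machinery is replaced by one hand-written rule, and all names come from the pseudo-example, not from any real opencog API:

```python
# Toy simulation of the bottle-on-table scenario sketched above.
# Facts are (subject, relation, value) triples; one forward-chaining
# rule (coord-rule) propagates coordinates across the "over" relation.

def forward_chain(facts):
    """coord-rule: if x is at coord1 and y is over x, then y is at coord1."""
    new = set(facts)
    for (y, rel, x) in facts:
        for (x2, rel2, coord) in facts:
            if rel == "over" and rel2 == "at" and x2 == x:
                new.add((y, "at", coord))
    return new

def plan_grab(obj, facts):
    """Backward step of grab-rule: to grab obj, we need its coordinates."""
    for (s, rel, coord) in facts:
        if s == obj and rel == "at":
            return ["go_to " + coord, "grab " + obj]
    return ["go_random"]  # knowledge missing: explore instead

facts = {("bottle", "over", "table")}
print(plan_grab("bottle", facts))      # ['go_random']
facts.add(("table", "at", "(1,1,1)"))  # vision sensor recognizes the table
facts = forward_chain(facts)           # infers the bottle is at (1,1,1)
print(plan_grab("bottle", facts))      # ['go_to (1,1,1)', 'grab bottle']
```

This reproduces the trace above: planning fails, the robot explores, vision adds a fact, forward chaining fills the gap, and the plan succeeds.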
> For Eva, the self-model and world-model are all part of the same thing, and they were hand-crafted (not learned). [...] I think I know how to do these things, but now I have to ... do them.
> -- Linas

I haven't looked at Meaning-Text Theory yet (a serious gap, I think!); I'll fix that! What I have described seems to me to go precisely in this direction, but it was still only an idea. I can still change; I will have to talk to my supervisors to also evaluate the new possibilities you have shown me!

In the meantime, thanks to everyone; my knowledge base is improving a lot too!
Michele
So, naively, simplistically, a "true AGI system" should be capable of learning the "knowledge base", instead of relying on humans to craft one.

Now come the blurry parts: the learning algorithm itself is hand-crafted, so isn't that a form of cheating? We are once again relying on humans to do the work. For example, for neural nets, effectively all of them are trained on a selection of images curated by human beings. The neural net learns how to recognize a photo of a horse, but it was trained on a human-curated training set. So, again, that's "cheating". Have you heard the expression "it's turtles all the way down"? Well, for neural nets, it's hand-crafted datasets all the way down. It's people all the way down.

The goal of building a true AGI is to avoid this. Step 1 is to avoid hand-crafted training sets. Step 2 is to avoid hand-crafted algorithms. I'm working on Step 1. I suppose that Step 2 is beyond the abilities of what can be done today. It's a bit blurry.
> But there is one thing I don't completely understand: to get a GroundedPredicateNode to execute a Python function, can I use its STI? Should it be automatic? How does it work?

It is not automatic. You have to `cog-execute!` whatever code you want to trigger. There are several ways of doing this:
1) By hand ... obviously.
2) Write some scheme or some python code that loops over whatever needs to be looped over, searching for high or low STI or any other Value or StateLink or whatever might be changing, and call cog-execute! as needed.
3) Do the above, but entirely in Atomese. There is a way to do an infinite loop in Atomese: it is actually a tail-recursive call to the function itself. It should be possible to do everything you need to do in "pure Atomese". Now, Atomese was never meant to be a full-scale programming language, like python or scheme, so it is missing many commonplace ideas that make python/scheme/c++/etc. "human friendly". But it does have enough to make most things possible, and many things "easy" (-ish).

The Eva/Sophia code did version 3. I forget where the main loop is; it's only 3 lines of code total, so it's easy to miss. It might be in one of the repos that was moved around. Everything else was controlled by SequentialAndLinks, which stepped through a tree of decisions, triggering a GroundedPredicate whenever some condition was met.
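Option 2 above — a hand-written polling loop that watches STI values and fires a trigger when a threshold is crossed — might look like the following in outline. This is plain Python with a stand-in atom table and handler, not the real AtomSpace/AttentionBank API:

```python
# Sketch of option 2: poll over STI-like values and fire a handler
# (the cog-execute! stand-in) when an atom's STI crosses a threshold.
# The atom table is a plain dict; names and threshold are illustrative.

STI_THRESHOLD = 100

def poll_once(atoms, execute):
    """Scan the table; 'cog-execute!' any atom at or over the threshold."""
    fired = []
    for name, sti in atoms.items():
        if sti >= STI_THRESHOLD:
            execute(name)   # stand-in for cog-execute! on that atom
            fired.append(name)
    return fired

log = []
atoms = {"loud-sound-handler": 150, "face-tracker": 20}
poll_once(atoms, log.append)
print(log)  # ['loud-sound-handler']
```

A real loop would repeat this scan (or, per option 3, be written as a tail-recursive Atomese loop) instead of running once.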
There were three design goals:
a) Make sure Atomese had everything it needed to control a robot.
b) Make sure that the Atomese was simple enough that other algorithms could analyze it and modify it. For example, it should be possible (in principle) for MOSES or URE or PLN or some other system to analyze and modify the robot-control code (in practice, this was never done). Keeping the robot code in the form of a decision tree should mean that it is simple enough that other systems could analyze that tree, edit it, modify it, extend it, and thus create brand-new robot behaviors out of "thin air".
c) Make sure that the design of Atomese itself was simple enough and usable enough to allow a) and b) above. This is an ongoing project.
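The SequentialAndLink control style described above — step through conditions in order, stop at the first failure, trigger the grounded action only when every condition holds — reduces to short-circuit evaluation. A toy rendering, with invented condition and action names (not the Eva code itself):

```python
# Toy rendering of a SequentialAnd-style behavior step: evaluate
# conditions left to right, short-circuit on the first failure, and
# trigger the grounded action only if every condition held.

def sequential_and(conditions, action, state):
    """Mimics a SequentialAndLink guarding a GroundedPredicate."""
    for cond in conditions:
        if not cond(state):
            return False  # short-circuit; the action never fires
    action(state)
    return True

actions_fired = []
ran = sequential_and(
    [lambda s: s["face_visible"], lambda s: not s["is_talking"]],
    lambda s: actions_fired.append("make-eye-contact"),
    {"face_visible": True, "is_talking": False},
)
print(ran, actions_fired)  # True ['make-eye-contact']
```

The simplicity of this structure is what makes design goal b) plausible: a tree of such steps is uniform enough that another program could, in principle, edit it.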
This can be solved by carefully hand-crafting a chatbot dialog tree. (The ghost chatbot system in opencog was designed to allow such dialog trees to be created.) Over the decades, many chatbots have been written. Again, there are common problems:
-- the text is hard-coded, and not linguistic. Minor changes in wording cause the chatbot to get confused.
-- there is no world-model, or it is ad hoc and scattered over many places.
-- no ability to perform reasoning.
-- no memory of the dialog. ("What were we talking about?" Well, chatbots do have a one-word "topic" variable, so the chatbot can answer "we are talking about baseball", but that's it. There is no "world model" of the conversation, and no "world model" of who the conversation was with: "On Sunday, I talked to John about a bottle on a table and how to grasp it.")

Note that ghost has all of the above problems. It's not linguistic, it has no world-model, it has no defined representation that can be reasoned over, and it has no memory.

20 years ago, it was hard to build a robot that could grasp a bottle. It was hard to create a good chatbot. What is the state of the art today? Well, Tesla has self-driving cars, and Amazon and Apple have chatbots that are very sophisticated. There is no open source for any of this, and there are no open standards, so if you are a university grad student (or a university professor), it is still very, very hard to build a robot that can grasp a bottle, or a robot that you can talk to. And yet, these basic tasks have become "engineering"; they are no longer "science". The science resides at a more abstract level.

--linas
On 3/24/21 3:53 PM, Michele Thiella wrote:
> My spoken English is not the best, but it will be a way to improve that too.
No problem, I'll adjust, by switching to a terrible French accent. :-)
> Unfortunately, this week I'm busy with other university things, could it
> be one of the next Fridays?
Sure, Fri 2 April then, I'm available from 8am to 3pm EET, then from 5pm
to the end of the night.
Let me know your timing and we can meet then.
Hi Linas,
I hope I'm not disturbing and wasting too much of your time!
On Wednesday, March 24, 2021 at 20:11:49 UTC+1, linas wrote:
> So, naively, simplistically, a "true AGI system" should be capable of learning the "knowledge base", instead of relying on humans to craft one. [...] Step 1 is to avoid hand-crafted training sets. Step 2 is to avoid hand-crafted algorithms. I'm working on Step 1.

I begin to struggle not to say nonsense. So, if I understand your point 1 correctly, then, considering the README of the learn repo, the basic idea is unsupervised learning for natural language, with the next goal of extending the domain from text to "all things in the world".
So, (still using unsupervised learning?) let the robot build its representation of the world through its observations.
Now, the input set is no longer hand-crafted, but the algorithm still has the "turtles problem". Your point 2 would solve that too, right? The only option I can think of is that the AGI itself writes its own algorithms... but then, recursively, the AGI would have to invent itself.
But then, without a "true-AGI" learning, I'll never have a "true-AGI" knowledge base and without that I'll not be able to continue, right?
Why work for point 1 if point 2 is a prerequisite?
It seems like a no-win situation. Maybe I'm just a pessimist!
There will be another way... In the end, our knowledge base was also helped by our parents in some way.
But there is a thing that i not completly understand: to activate a GroundedPredicateNode to execute a py function i can use its STI right? It should be automatic, how is it works?It is not automatic. You have to `cog-execute!` whatever code you want to trigger. There are several ways of doing this.1) by hand .. obviously.2) write some sheme or some python code that loops over whatever needs to be looped over, searching for high or low STI or any other Value or StateLink or whatever might be changing, and call cog-execute! as needed.3) Do the above, but entirely in Atomese. There is a way to do an infinite loop in atomese -- it is actually a tail-recursive call to the function itself. It should be possible to do everything you need to do in "pure atomese". Now, atomese was never meant to be a full-scale programming language, like python or scheme, so it is missing many commonplace ideas that make python/scheme/c++/etc. "human friendly". But it does have enough to make most things possible, and many things "easy" (-ish)The Eva/Sophia code did version 3. I forget where the main loop is; its only 3 lines of code total, so its easy to miss. It might be in one of the repos that was moved around. Everything else was controlled by SequentialAndLinks, which stepped through a tree of decisions, triggering a GroundedPredicate whenever some condition was met.There were three design goals:a) Make sure atomese had everything it needed to control a robotb) Make sure that the atomese was simple enough that other algorithms could analyze it and modify it. For example, it should be possible (in principle) for MOSES or URE or PLN or some other system to analyze and modify the robot-control code. 
(In practice, this was never done.) Keeping the robot code in the form of a decision tree should mean that it is simple enough that other systems could analyze that tree, edit it, modify it, extend it, and thus create brand-new robot behaviors out of "thin air".

c) Make sure that the design of Atomese itself was simple enough and usable enough to allow a) and b) above. This is an ongoing project.

Ah, ok, now it makes a lot more sense! I really like solution 3).
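The Eva-style arrangement -- an outer loop driving SequentialAndLinks that step through a decision tree and trigger grounded predicates -- can be sketched outside OpenCog. Everything below is a hypothetical Python stand-in for illustration, not the real Atomese API:

```python
# Sketch of the Eva-style control scheme: a main loop repeatedly executes
# a SequentialAnd, which steps through its "outgoing set" and stops at the
# first child that evaluates false. Children are plain callables standing
# in for Atoms with an execute method.

def sequential_and(children):
    """Execute children in order; stop and fail on the first false one."""
    for child in children:
        if not child():
            return False
    return True

log = []
def check(name, result):
    """Stand-in for a GroundedPredicateNode: log the call, return result."""
    def run():
        log.append(name)
        return result
    return run

behavior_tree = [check("face-visible?", True),
                 check("face-is-new?", False),  # evaluation stops here
                 check("say-hello", True)]      # never reached

# The "main loop" -- in Eva this was a tail-recursive Atomese loop;
# here, just a bounded Python loop over the decision tree.
for _ in range(2):
    sequential_and(behavior_tree)
print(log)  # ['face-visible?', 'face-is-new?', 'face-visible?', 'face-is-new?']
```

Because the whole behavior is just a list of atoms, another program could in principle rewrite `behavior_tree` to create new behaviors, which is design goal b) above.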
I'm experimenting a little with the potential of Atomese (sometimes at random), but it's nice to write. I'm also learning Scheme, which I knew nothing about.
Can I ask you to say something about the tree of decisions in Eva? Was it a separate Scheme/Python module that analyzed the SequentialAnd?
While I'm at it, I can't place some components in your architecture: I read Moshe Looks's thesis on MOSES and what I found on OpenPsi. But in practice, what were they used for?
Finally, in practice what does PLN do/have more than URE?
If I didn't want interactions with humans, could I do it differently?
A certain variation of the sensor values already represents "the forward movement"; I do not need to associate a name with it if I don't speak,
and for the Atom "bottle" I could use its ID instead. I don't understand why removing natural language implies having an inference devoid of "true understanding".
Stupid example: if I speak Italian with a Frenchman, neither of us understands the other. But a bottle remains a bottle for both, and if I offer him my hand he will probably shake it too ... or he will leave without saying goodbye.
I'm probably missing something big, but until I bang my head against it, I don't see it.

This can be solved by carefully hand-crafting a chatbot dialog tree. (The ghost chatbot system in OpenCog was designed to allow such dialog trees to be created.) Over the decades, many chatbots have been written. Again, there are common problems:

-- The text is hard-coded, and not linguistic. Minor changes in wording cause the chatbot to get confused.
-- There is no world-model, or it is ad hoc and scattered over many places.
-- There is no ability to perform reasoning.
-- There is no memory of the dialog. ("What were we talking about?" -- well, chatbots do have a one-word "topic" variable, so the chatbot can answer "we are talking about baseball", but that's it. There is no "world model" of the conversation, and no "world model" of who the conversation was with ("On Sunday, I talked to John about a bottle on a table and how to grasp it").)

Note that ghost has all of the above problems. It's not linguistic, it has no world-model, it has no defined representation that can be reasoned over, and it has no memory.

20 years ago, it was hard to build a robot that could grasp a bottle. It was hard to create a good chatbot. What is the state of the art today? Well, Tesla has self-driving cars, and Amazon and Apple have chatbots that are very sophisticated. There is no open source for any of this, and there are no open standards, so if you are a university grad student (or a university professor) it is still very, very hard to build a robot that can grasp a bottle, or a robot that you can talk to. And yet, these basic tasks have become "engineering"; they are no longer "science". The science resides at a more abstract level.

--linas

I find the abstract level incredible, both in terms of beauty and difficulty!
Michele
Looks like it is 3pm EDT, fyi. See you then 😊
The link for the recording *WILL* be at https://xanatos.com/downloads/OpenCogNil20210402.mp4
Note the filesize is 598 MB (1920 x 1080). It is uploading now, and I live in the boondocks (which means we only have DSL here… it'll take a while). Try it Saturday morning; it will be ready by then (probably ready by 8pm EDT USA, but I believe in conservative estimates 😊).
Thanks Nil and all, I got some valuable ideas out of our conversation.
Good weekend to all,
Dave
From: ope...@googlegroups.com <ope...@googlegroups.com> On Behalf Of Michael Duncan
Sent: Friday, April 2, 2021 10:09 AM
To: opencog <ope...@googlegroups.com>
Subject: Re: AGI & Robotics & Sophia [was Re: New user [was Re: [opencog-dev] Problem in atom deletion from postgreSQL
it looks like 2pm edt, i'll be there!
For those waiting, just now I was able to download David's recording. Again, great talk Nil!

> On Thursday, April 1, 2021, Nil wrote:
> Alright, so the time is Friday 2 Apr, 9pm EET (3pm EDT, if I'm correct) and the place is https://meet.jit.si/proto-agi. Everybody is invited.

Earlier in that thread, Nil had also answered:

> Is there any recommended book/paper to study before the code of PLN rules?

Search for Probabilistic Logic Networks in https://wiki.opencog.org/w/Background_Publications#Books_Directly_Related_to_OpenCog_AI

> Finally, in practice what does PLN do/have more than URE?

The URE is a generic rewriting system that needs a rule set to operate. See https://wiki.opencog.org/w/Unified_rule_engine for more info. Such a rule set can be PLN, which has been specifically tailored to handle uncertain reasoning: https://github.com/opencog/pln
> But then, without "true-AGI" learning, I'll never have a "true-AGI" knowledge base, and without that I won't be able to continue, right?

I don't understand the question.

> Why work for point 1 if point 2 is a prerequisite?

It's not a prerequisite!

> It seems like a no-win situation. Maybe I'm just a pessimist!

I don't understand.

> There will be another way... In the end, our knowledge base was also helped by our parents in some way.

? I don't understand what our parents have to do with this...
The primary benefit of scheme is that it is functional programming, and learning how to code in a functional programming language completely changes your world-view of what a program is, and what software is. If you only know C/C++/java/python, then you have a very narrow, very restricted view of the world. You're missing a large variety of important concepts in software. Yes, learning functional programming is "good for you".
> Can I ask you to say something about the tree of decisions in Eva? Was it a separate Scheme/Python module that analyzed the SequentialAnd?

No, it was just plain Atomese. Many Atoms have an execute method (actually, all Atoms have an execute method, but it is non-trivial on only some of them). The execute method on SequentialAnd simply steps through each Atom in its outgoing set, and asks "are you true?" -- by calling execute, and seeing if it returns "true". If some atom in the outgoing list returns "false", then SequentialAnd stops and returns false. Otherwise, it continues till it reaches the end of the list, and then returns true. There is no "external module" to perform this analysis.

> While I'm at it, I can't place some components in your architecture: I read Moshe Looks's thesis on MOSES and what I found on OpenPsi. But in practice, what were they used for?

I used MOSES to analyze medical notes from a hospital (free-text doctor and nurse notes) and predict patient outcomes. Some other people used MOSES to try to predict the stock market. Ben/Nil used it to hunt down genes that correlate with long life.

OpenPsi was used as an inspiration for a kind of combined prioritization-plus-human-emotion-modelling system. It was, and still is, problematic, for failing to separate these two ideas. There are many practical problems in AtomSpace applications that lead to a combinatorial explosion of possibilities, and one part of OpenPsi seems to be effective in deciding which of these possibilities should be explored first. Unfortunately, the design combined it with a really terrible model of human psychology, and this led to a mass of confusion that was never fully resolved. It doesn't help that the creator of MicroPsi came back and said that OpenPsi has no resemblance to MicroPsi whatsoever. There are some good ideas in there, but the implementation remains problematic.

> Finally, in practice what does PLN do/have more than URE?

I suppose Nil answered this already, but ...
PLN defines a certain specific set of truth-value formulas. The URE doesn't care about truth-value formulas. The URE can chain together rules -- arbitrary collections of rules. PLN is a specific collection of rules, and they are not only specific rules, but they are coupled with specific formulas for determining the truth value.

So, for example, consider chaining implications: if A implies B and B implies C, then A implies C. This is a "rule" that recognizes an input of two pairs (A,B) and (B,C), and creates the pair (A,C); if the truth of A is T, it marks the truth of C as being T. A variant of this is Bayesian deduction, where the truth values are replaced by conditional probabilities.

The URE doesn't care what kind of rule it is, or what happens to the truth values. The rules could be nonsense, and the formulas could be crazy, and the URE would still try to chain them.
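The separation being described -- a generic chainer on one side, truth-value formulas on the other -- can be illustrated with a toy example. The formula below is a deliberately simplistic stand-in (a plain product of strengths), not the real PLN deduction formula:

```python
# Sketch of the URE/PLN split: a generic one-pass chainer that applies
# the deduction rule (A->B, B->C => A->C), parameterized by a truth-value
# formula it knows nothing about. Swap the formula and the chaining
# machinery is unchanged -- that is the point being made above.

def deduce(implications, tv_formula):
    """One forward-chaining pass: derive A->C from A->B and B->C."""
    derived = dict(implications)            # {(antecedent, consequent): strength}
    for (a, b1), s1 in implications.items():
        for (b2, c), s2 in implications.items():
            if b1 == b2 and (a, c) not in derived:
                derived[(a, c)] = tv_formula(s1, s2)
    return derived

toy_formula = lambda s1, s2: s1 * s2        # hypothetical stand-in formula
kb = {("A", "B"): 0.9, ("B", "C"): 0.8}
print(deduce(kb, toy_formula))
# derives ("A", "C") with strength 0.9 * 0.8 = 0.72
```

Feeding the same chainer a different `tv_formula` (say, a Bayesian one) changes the numbers but not the chaining, which mirrors how the URE would happily chain "crazy" formulas.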
If your machine is incapable of talking, it would be hard to argue that it's smart. Now, dogs, cats, crows and octopuses can't talk, and for centuries some people (many people) believed they weren't smart. Well, now I think we all know better, but still, the best way to prove how smart or stupid you are is to open your mouth.

> If I didn't want interactions with humans, could I do it differently?

Well, you could build a self-driving car. But I don't think Elon Musk is claiming that FSD is AGI.

> A certain variation of the sensor values already represents "the forward movement"; I do not need to associate a name with it if I don't speak,
> and for the Atom "bottle" I could use its ID instead. I don't understand why removing natural language implies having an inference devoid of "true understanding".

You know the expression "writing about music is like dancing about architecture"? Well, you could build a robot that dances, but you would have a hard time convincing anyone that it's smart, that it's anything other than a clever puppet.

> Stupid example: if I speak Italian with a Frenchman, neither of us understands the other. But a bottle remains a bottle for both, and if I offer him my hand he will probably shake it too ... or he will leave without saying goodbye.

It's all very contextual. If you speak Italian, and you see a human, you assume that what you see has all the other properties of being a human. If you speak Italian, and you see a robot with a mechanical arm, you assume that it has all the typical properties of a robot: stupid and lifeless, just a machine.

-- Linas
It was just a personal reflection. I mean that I cannot get a project to AGI without a learning algorithm (because the knowledge base would then surely be hand-crafted).
My idea is that maybe I shouldn't rule out supervised learning, because human learning is sometimes guided by a teacher, who shows you the image of a horse and also tells you that it is a horse.
Ok so, in summary: either I make it talk or I have to invent another way to demonstrate its intelligence!
Planning in my example didn't work due to certain assumptions being made in the URE. Let's say the URE comes up with a nested BindLink like this:

(ExecutionOutputLink (stack c a)
  ...
  (ExecutionOutputLink (stack a b)
    (ExecutionOutputLink (pickup a))))

When it evaluates (stack c a), all atoms introduced by (stack a b) and (pickup a) are present in the atomspace. So the preconditions of stacking c on a are not satisfied (a is both being 'held' and not 'held'). Probably there is a simple workaround, like placing all new facts into a separate ContextLink. Such simple planning problems are more naturally expressed in the new opencog-hyperon,
so I lost the motivation for turning URE into a planner.
Besides, all planners rely on heuristics to guide the search, so even if you make the URE work on this particular small example, you'll have to do some more work to integrate them into the URE.
On Wed, 28 Apr 2021 at 12:38, Michele Thiella <acikoa...@gmail.com> wrote:

Hello Nil, hello Linas and hello everyone,

First of all, Nil, I have spoken to my supervisors and unfortunately I will not be able to develop your ROCCA project. I'll try to follow the developments, since you explained to me how the code works (thanks again). Instead, I will focus on solving the blocksworld problem (and then expand the project by adding communication).

So, I'm studying how URE inference works. My test repository can be found here: https://github.com/raschild6/blocksworld_problem

I don't understand what I'm doing wrong:
- Why does the backward chaining fail to resolve the goal?
- Also, I think I don't quite understand how the fuzzy conjunction introduction and elimination rules work.

I have other questions related to the URE log, but for now I would like to understand these. (I don't know if it's better to open a new conversation.) Thanks again for your help, sorry for the inconvenience!

Michele
Now, what I say above is "easy to say" but "hard to do" -- implementing what I suggest is a large project. But then, in software, nothing is free. Facebook, Google, and Amazon employ thousands of engineers because writing good software is hard. Imagining that you can create a new planner out of thin air in a few months is not a realistic dream. Don't repeat history; learn from it.
As I understand it, what is being proposed here is a student research project, not a large-scale engineering project...
Exploring constraint-satisfaction-based planning makes sense, but for some planning domains this approach may not be best. E.g. if you're planning in a highly dynamic environment (as faced say by robots moving around in a house or on the street) then I'm not sure the available constraint satisfaction algos can deal well w/ the needed real-time plan updating...?
My 2 tokens worth...
I was referring to Michele's student project which is clearly not
aimed at scalable production code... whether Michele uses Hyperon or
Original OpenCog, it's a student research experiment on
BlocksWorld....
Hyperon is ultimately intended as an alternative to current OpenCog
Atomspace, yes. However the current crude Hyperon prototype code is
definitely NOT intended as an alternative to Atomspace -- yet can
still be used for research experimentation.
> (1) ASP will solve most planning and constraint satisfaction problems in milliseconds. So you can do a 60-frames-per-second update rate if you wish.
Ah cool, good to know!!! My intuition on this is obsolete,
apparently ;p ... will look up some references...
Could you explain (with enough detail) how it is more natural? I am very much interested in allowing natural expressivity in the atomspace.
(= (if False $then $else) $else)
(= (if True $then $else) $then)
> Hyperon is ultimately intended as an alternative to current OpenCog
> Atomspace, yes.
Actually I should be more precise here.
It is not yet clear whether we'll end up replacing the current
Atomspace in the Hyperon system... we are open to doing so but this
isn't yet decided...
-- a static pattern matcher (which however manages bound variables
fairly sophisticatedly, and can match variables against whole
sub-metagraphs...), which then also gets an efficient implementation
for execution against distributed Atomspaces
-- an Atomese language which is used to do a lot of the more
programmatic stuff done in the current Pattern Matcher,
as well as a
lot of the stuff habitually done in Scheme scripts in current OpenCog
usage
This is different from the design pattern used in the current PM,
which embeds an awful lot of sophisticated program-control
functionality into the Pattern Matcher itself (thus making it way more
than a pattern matcher in any conventional sense)...
The current PM seems like much more complex code than the current
Atomspace, but maybe I'm missing something subtle in the latter?
I know nothing about the blocksworld problem, so I cannot help directly. Indirectly, you can use (cog-report-counts) to monitor the number of atoms in the atomspace -- I typically see an average of about 1KB or 2KB per atom. So, a few GB is enough for millions of atoms, normally. This will give you a hint of what might be going on there.
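A quick sanity check of that estimate (the 1-2 KB/atom figure and "a few GB" come from the paragraph above; the 4 GB RAM figure is an assumed example):

```python
# Back-of-the-envelope arithmetic for the atom-count estimate above:
# at roughly 1-2 KB per atom, a few GB of RAM holds millions of atoms.

bytes_per_atom = 1.5 * 1024      # midpoint of the 1-2 KB per-atom estimate
ram_bytes = 4 * 1024**3          # "a few GB": assume 4 GB for the example
atoms = ram_bytes // bytes_per_atom
print(f"{atoms:,.0f} atoms")     # on the order of a few million
```

So if (cog-report-counts) shows far fewer atoms than this while RAM is exhausted, the memory is going somewhere other than the visible atomspace, which is what the next paragraph is about.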
The only "problem" is that URE uses some temporary atomspaces; those are not included in the count. The URE also mallocs structures that are not part of the atomspace.There is a third but unlikely issue -- guile garbage collection not running often enough. Take a look at (gc-stats) to get info, and (gc) to manually run garbage collection. It's unlikely this is a problem, but there were issues with older guile-- say, version 2.0. I'm hoping you are on version 3.0, or at least version 2.2.Perhaps @Nil Geisweiller can help with URE ram issues.--linas
Great, I really needed some monitoring! I'll take a look at these numbers! Another thing I forgot: is there a way to get the inference tree related to a solution obtained from the BC?
I saw that there was some Scheme code for this somewhere, but I didn't understand how it worked. Thank you.

PS. Blocksworld problem (very briefly) = some blocks on a table, 4 actions (pick-up, put-down, stack: block 1 on top of block 2, unstack: pick up block 1 which is on top of block 2), objective: to build a tower of blocks.
First test, (cog-report-counts):

((ConceptNode . 8) (NumberNode . 2) (PredicateNode . 9) (SetLink . 5) (ListLink . 30) (MemberLink . 7) (ContextLink . 5) (AndLink . 67) (NotLink . 14) (PresentLink . 7) (VariableNode . 11) (VariableList . 6) (DefineLink . 7) (BindLink . 7) (EvaluationLink . 44) (TypeNode . 6) (TypeChoice . 2) (TypedVariableLink . 11) (EqualLink . 14) (ExecutionOutputLink . 7) (SchemaNode . 2) (DefinedSchemaNode . 7) (GroundedSchemaNode . 6) (InheritanceLink . 7) (ExecutionLink . 2))
Second test, (gc-stats) when I ran out of RAM:

((gc-time-taken . 334108055) (heap-size . 6316032) (heap-free-size . 1228800) (heap-total-allocated . 54519584) (heap-allocated-since-gc . 1691024) (protected-objects . 16) (gc-times . 22))

I'm trying to understand their meaning. Do these numbers tell you something?
It was my suggestion to represent search states with ContextLinks, somehow like that:
There is some collision in terminology. The classical planning problem (like blocksworld) is defined by a tuple <S, s0, S_goal, A, f>, where S is the set of states, s0 the initial state, S_goal the set of goal states, A the set of actions, and f the state transition function; f accepts a state and an action and outputs a state. The task is to find a sequence of (state, action) pairs leading from the initial state to a goal state.
Unlike FSM states, states in planning have structure, which enables informed search. And there are too many states: if we work in a boolean domain like blocksworld, then there are 2^(bit-length of the state) of them; we can't build an FSM for such problems.
So I meant to use ContextLinks to represent elements of S, while state transition rules in Michele's example are represented by BindLinks. There is one BindLink for each action, which is unlike the definition; maybe it would be better to have one BindLink for the computation of the next state, but I don't see a simple way to write it in pure Atomese either. So I suggest working with ListValue for states + EvaluationLinks with grounded predicates.
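As a concrete (non-Atomese) illustration of the tuple <S, s0, S_goal, A, f>, here is a tiny blocksworld searched by plain breadth-first search. States are sets of on(x, y) facts and `successors` plays the role of f; this is only a sketch of the formal definition, not the URE encoding discussed in the thread:

```python
# Tiny blocksworld as <S, s0, S_goal, A, f>: states are frozensets of
# (block, support) facts, actions are "move X onto Y", and plan() does
# uninformed BFS -- no heuristics, as noted above.

from collections import deque

def successors(state):
    """State transition function f: yield (action, next_state) pairs."""
    on = dict(state)                         # block -> thing it rests on
    clear = {b for b in list(on) + ["table"]
             if b == "table" or b not in on.values()}
    for block in on:
        if block in clear:                   # only a clear block can move
            for dest in clear:
                if dest != block and on[block] != dest:
                    nxt = dict(on)
                    nxt[block] = dest
                    yield (f"move {block} onto {dest}",
                           frozenset(nxt.items()))

def plan(s0, goal):
    """BFS from s0 to the first state containing all goal facts."""
    frontier, seen = deque([(s0, [])]), {s0}
    while frontier:
        state, actions = frontier.popleft()
        if goal <= state:
            return actions
        for action, nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, actions + [action]))

s0 = frozenset({("a", "table"), ("b", "a"), ("c", "table")})
goal = frozenset({("a", "b"), ("b", "c")})   # build the tower a-b-c
print(plan(s0, goal))  # ['move b onto c', 'move a onto b']
```

The 2^(bit-length) blow-up mentioned above shows here too: `seen` grows with the reachable state set, which is why real planners need heuristics or SAT-style pruning.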
"The book is on the table." not create the atom(EvaluationLink(PredicateNode "on@903a1a18-124d-498d-97af-447277a798e5" )(ListLink(ConceptNode "book@12357525-7ca9-4d5e-85f8-b565228459aa")(ConceptNode "table@be0f51a3-a7a0-400e-80ea-9ca860928af4")))
Or rather, it creates it. But the ListLink contains only the table ... why?
- I tried to use StateLink with backward chaining.

- Pushing/popping atomspaces: that's essentially what I'm doing now. Model-based rules all work well. So I copy the current atomspace into a "temporary" one. The algorithm is a bit heavy and the tree explodes quickly, but it is conceptually correct and working.

- I don't think I understand the application of modal logic (for lack of knowledge, I think).

- Following one of the examples in the link https://wiki.opencog.org/w/RelEx2Logic_representation, "The book is on the table." does not create the atom

(EvaluationLink
  (PredicateNode "on@903a1a18-124d-498d-97af-447277a798e5")
  (ListLink
    (ConceptNode "book@12357525-7ca9-4d5e-85f8-b565228459aa")
    (ConceptNode "table@be0f51a3-a7a0-400e-80ea-9ca860928af4")))

Or rather, it creates it. But the ListLink contains only the table ... why?
Yes, StateLink is useless with backwards-chaining. It can only work with forward-chaining.
A minor performance note: using push/pop might be slightly faster, by avoiding a copy. (you can also manually push/pop with cog-new-atomspace, cog-set-atomspace! and stuff like that.)
This is a generic problem with backwards-chaining: the algorithms are heavy, slow, and have combinatoric explosions. This has been known since the 1980s and has been the subject of extensive academic research, and, no doubt, dozens of PhD theses.
This is why I keep yabbering about answer-set programming (ASP) and the Univ. of Potsdam ASP solver. Because ASP uses a SAT solver under the covers, much or most or all of the combinatoric explosion can be avoided. Or rather, the SAT solvers prune the graph in such a way that the explosion is avoided. Exactly how to make use of this whiz-bang technology in the AtomSpace remains an open research question.
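The kind of pruning being referred to can be illustrated with a toy DPLL SAT solver: unit propagation forces assignments instead of blindly enumerating them. This is only the bare SAT core; real ASP solvers such as clingo from the Potsdam group layer grounding, clause learning, and much smarter heuristics on top of it:

```python
# Toy DPLL solver. Clauses are lists of signed ints in DIMACS style:
# positive literal = variable true, negative = variable false.

def simplify(clauses, lit):
    """Assign lit=True: drop satisfied clauses, strip the negated literal."""
    out = []
    for clause in clauses:
        if lit in clause:
            continue                       # clause satisfied, drop it
        reduced = [l for l in clause if l != -lit]
        if not reduced:
            return None                    # empty clause: conflict
        out.append(reduced)
    return out

def dpll(clauses):
    if not clauses:
        return True                        # every clause satisfied
    for clause in clauses:                 # unit propagation: forced moves
        if len(clause) == 1:               # prune instead of branching
            rest = simplify(clauses, clause[0])
            return rest is not None and dpll(rest)
    lit = clauses[0][0]                    # branch on the first literal
    for choice in (lit, -lit):
        rest = simplify(clauses, choice)
        if rest is not None and dpll(rest):
            return True
    return False

# (x1 or x2) and (not x1 or x2) and (not x2 or x3): satisfiable
print(dpll([[1, 2], [-1, 2], [-2, 3]]))    # True
# x1 and not x1: unsatisfiable
print(dpll([[1], [-1]]))                   # False
```

Encoding a planning problem as clauses hands the entire search over to this propagation machinery, which is the appeal of the ASP route.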
> I don't think I understand the application of modal logic (for lack of knowledge, I think).

Backwards chaining is a special case of modal logic. Very roughly, modal logic is about reasoning over beliefs ("if John believes X, then John should also believe Y" ... or rather, "it is possible that John believes X, in which case it is necessarily true that John believes Y") -- it is a form of reasoning over possible universes, where certain facts end up being necessarily true.

The backwards-chaining variant of this is "if block X is on top of block Y, then it is necessarily the case that Y is on top of the table or that Y is on top of Z", and backwards chaining is just "find all possible universes where block X is on top". (Replace "John believes X" with "X is on top"; the "possible universes" are those where the blocks stack correctly.)

There was some GSOC summer-school effort to map the URE to modal logic, but it wasn't accomplished.
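The "find all possible universes" reading of backward chaining can be sketched propositionally. This leaves out variables and unification, which is exactly where the real combinatorial cost comes from; all facts and rule names below are made up for the example:

```python
# Naive propositional backward chainer over Horn rules: to prove a goal,
# find every rule whose head matches it and recursively prove the body.
# Each alternative body is one "possible universe" to explore.

facts = {"on(c, table)", "on(b, c)", "on(a, b)"}
rules = {                                   # head -> list of alternative bodies
    "above(a, c)": [["on(a, b)", "on(b, c)"]],
    "top(a)": [["above(a, c)", "on(c, table)"]],
}

def prove(goal):
    if goal in facts:
        return True
    for body in rules.get(goal, []):        # try every rule for this head
        if all(prove(sub) for sub in body):
            return True
    return False

print(prove("top(a)"))      # True
print(prove("on(b, a)"))    # False: no fact or rule supports it
```

With variables, each recursive call can match many rule instances instead of one, and the branching multiplies -- the explosion discussed just above.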
> Following one of the examples in the link https://wiki.opencog.org/w/RelEx2Logic_representation, "The book is on the table." does not create the atom
>
> (EvaluationLink
>   (PredicateNode "on@903a1a18-124d-498d-97af-447277a798e5")
>   (ListLink
>     (ConceptNode "book@12357525-7ca9-4d5e-85f8-b565228459aa")
>     (ConceptNode "table@be0f51a3-a7a0-400e-80ea-9ca860928af4")))
>
> Or rather, it creates it. But the ListLink contains only the table ... why?

I assume it's just a bug. The R2L code does not have an active maintainer. Open a bug report. I'll look at it. If it's real easy, I'll try to fix it. I hope it will be easy, and not some ugly mess.
> A minor performance note: using push/pop might be slightly faster, by avoiding a copy. (You can also manually push/pop with cog-new-atomspace, cog-set-atomspace! and stuff like that.)

I hadn't seen push and pop, so I was using (cog-new-atomspace). Actually, the algorithm is in Python, so I do Atomspace() and, for the copy, I extract the atoms from one and re-add them to the other (probably with decreased performance).
> This is a generic problem with backwards-chaining: the algorithms are heavy, slow, and have combinatoric explosions. This has been known since the 1980s and has been the subject of extensive academic research, and, no doubt, dozens of PhD theses.

I saw it in old tests. Anyway, my algorithm is more or less a guided forward chaining, but going up or down the tree often makes no difference.
> This is why I keep yabbering about answer-set programming (ASP) and the Univ. of Potsdam ASP solver. Because ASP uses a SAT solver under the covers, much or most or all of the combinatoric explosion can be avoided. Or rather, the SAT solvers prune the graph in such a way that the explosion is avoided. Exactly how to make use of this whiz-bang technology in the AtomSpace remains an open research question.
I've studied the theory of the SAT problem and some pseudocode, but I don't know much about ASP. When I have more time I will try to learn about it, because solving the combinatorial explosion seems like a nice achievement.
1) StateLink may be buggy with push-pop.
Maybe I answered myself. Lg-atomese doesn't even do the "2logic" functionality.
Instead, for relex2logic there is no longer backward compatibility, right?