Problem with atom deletion from PostgreSQL

Michele Thiella

Mar 8, 2021, 4:58:12 AM
to opencog
Good morning everyone,
I have a simple problem with deleting atoms in PostgreSQL.
As far as I understand, to delete an atom saved in Postgres I should use:
(cog-delete (Concept "asdf"))
The command succeeds and the atom is removed from the AtomSpace, but not from the PostgreSQL database. Could someone kindly tell me why?
The PostgreSQL backend should be configured correctly, according to the guide on the wiki; saving new atoms to the database works.
Thanks in advance, and I apologize for the inconvenience.

Linas Vepstas

Mar 10, 2021, 5:35:42 PM
to opencog
Hi Michele,

Just around the same time that you sent this message (or a day before?), I spotted and fixed a problem with atom deletion. So please try rebuilding and reinstalling the atomspace (git pull; cd build; make -j; sudo make install) -- let me know if that solves the problem.

Note also: the proper name is cog-delete! (with an exclamation mark at the end). I may have removed the backwards-compat layer that allowed it to work without the exclamation mark.

Note also: there is a RocksDB backend too. It might be easier to use (no config required). In some synthetic benchmarks, it's 2x or 3x faster than Postgres. In the one "real-life" app that I'm using, it's 10%-50% slower. Go figure. Anyway, it's at https://github.com/opencog/atomspace-rocks

(There is also a network server backend: https://github.com/opencog/atomspace-cog ... the README explains more)

-- Linas

--
You received this message because you are subscribed to the Google Groups "opencog" group.
To unsubscribe from this group and stop receiving emails from it, send an email to opencog+u...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/opencog/8dd1cfa3-0e8a-4d85-ba2a-6b38fcd2aea7n%40googlegroups.com.


--
Patrick: Are they laughing at us?
Sponge Bob: No, Patrick, they are laughing next to us.
 

Michele Thiella

Mar 11, 2021, 1:42:53 PM
to opencog

You were right! After the pull everything works! It works so well that it has kept the backwards-compat layer, and it works even without the exclamation mark haha

Regarding RocksDB, I tried to install it, but now I don't remember at what point I stopped and why... I definitely had disk-space problems, though. Too bad, because it really appealed to me as a storage backend; anyway, 50% slower is really a lot!
However (unfortunately) my existing skills are with PostgreSQL, so for now I am satisfied, since everything works.
  
Thanks a lot for the answer! Good work!

Michele Thiella

Linas Vepstas

Mar 20, 2021, 3:04:05 PM
to opencog, Michele Thiella
So, out of curiosity, what are you using OpenCog for? What are you attempting to do? You're new here; you should introduce yourself!

-- Linas

Michele Thiella

Mar 21, 2021, 7:10:43 AM
to opencog
You're right, I didn't introduce myself properly! I mentioned my plans in reply to one of the previous conversations, but it was the wrong place, so it got lost.
Let's start again in the right way:

Hello everyone, I'm Michele Thiella from Italy (Padua).
I am about to graduate in Computer Engineering at the University of Padua.
I have always been passionate about artificial intelligence and, more generally, about everything that has not yet been invented/discovered/solved.
About 6 months ago I chose the topic for my master's thesis: AGI. I started reading around until I got to you, and I was blown away.
I presented your work (roughly) to my thesis supervisors, trying to get approval for a thesis involving OpenCog.
And here we are.

For my thesis, the proposal would be to use OpenCog to do TAMP (task and motion planning).
More precisely: using a simplified version of Sophia's architecture, swapping Blender for ROS and Sophia for a much simpler robot.
Leaving out the emotional sphere of the robot and aiming at the resolution of objectives, perhaps achieving cooperation between robots through a single knowledge base.
Or at least that was the idea.

I don't think I explained it the way I wanted, but I hope it comes across.

Linas Vepstas

Mar 22, 2021, 2:22:15 PM
to opencog
Wow!  Cool!  I'm excited!  In that case, I should mention a few things:

* Most of the Sophia code is in the opencog github repos.

* For trademark reasons, the open-source code should be called "Eva", not "Sophia"; we decided that was the proper way to avoid trademark issues.

* Anyway, the code is now many years out of date; Hanson Robotics has moved to a proprietary code base, which is not available.

* The code "used to work"; there are docker containers that pull in opencog, blender, the Eva blender head model with all of its animations, USB video for vision, and microphone + text-to-speech for sound. The problem is that the code in the opencog repos was moved around, changed, and renamed, so the docker containers are surely broken. I could show you (or anyone else who is interested) how to get that working again.

* There is little/no actual "AGI" in that code base. What it does do is to hook up blender and opencog (and video, etc.) to ROS, so that the Atomspace contains a description of what the camera sees (faces, via face detection), and so that the AtomSpace can control robot movements (smile, blink, turn left, ...).  The actual animation is done with Atomese scripts ("behavior trees") that encode things like "when a new person becomes visible, turn to face them (coordinates x,y,z) and smile".  She's actually calibrated, so that it looks like she is really looking at you (the blender model turns to face you) based on the USB camera coordinates; she'll track both with eyes and with head.
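To make the "behavior tree" idea concrete, here is a tiny sketch in plain Python (not Atomese; the names `face_greeting_rule`, `turn_to` and `smile` are made up for illustration) of the kind of rule those scripts encode:

```python
# Toy behavior-tree rule: "when a new person becomes visible,
# turn to face them (coordinates x,y,z) and smile".
# Purely illustrative; the real Eva scripts encode this as Atomese.

def face_greeting_rule(world, actions):
    """Tick once: fire the greeting behavior if a new face appeared."""
    face = world.get("new_face")           # e.g. {"x": 1.0, "y": 0.2, "z": 0.0}
    if face is None:
        return "idle"                      # condition failed; do nothing
    actions.append(("turn_to", face["x"], face["y"], face["z"]))
    actions.append(("smile",))
    world["new_face"] = None               # consume the event
    return "greeted"

actions = []
world = {"new_face": {"x": 1.0, "y": 0.2, "z": 0.0}}
print(face_greeting_rule(world, actions))  # greeted
print(actions)  # [('turn_to', 1.0, 0.2, 0.0), ('smile',)]
```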

* There was some prototype code so that you could verbally command her ("Eva please smile" and she would, "Could you turn left?" and she would) and some code to ask her what her current state is ("Eva, what are you doing?" "I'm looking left")  But that prototype code is not "AGI" -- it is scaffolding to  hook up vision and motor control to a "model of the world" and a "self-model", in such a form that natural language could access that model ("Eva what are you doing?") and to change that model ("Eva, blink three times").  The model, the representation of the world, needs to be in a form where both the language subsystem, and the other subsystems (vision, motors) could work with it.

* The plan was then to grow that model using automated learning techniques; this would be the actual AGI. However, that is still a long ways off. I'm working on learning right now, but it is not yet in a form where it could be hooked up to a robot (and is surely many years away; it's a big task, and I can only work part-time on it, and I've been unable to find funding for it, or to recruit any significant amount of help.)

Anyway, that's the status. I'd be thrilled to talk more!  I hope your master's proposal gets accepted!

-- Linas



Nil Geisweiller

Mar 23, 2021, 1:19:25 AM
to ope...@googlegroups.com
Forwarding to opencog as I forgot to reply-all.

-------- Forwarded Message --------
Subject: Re: New user [was Re: [opencog-dev] Problem in atom deletion from postgreSQL]
Date: Tue, 23 Mar 2021 07:15:10 +0200
From: Nil Geisweiller <ngei...@gmail.com>
To: Michele Thiella <acikoa...@gmail.com>

Hi Michele,

I'm working on something that might be relevant to your work, see

https://github.com/opencog/rocca
https://github.com/singnet/rocca (mirror)

It's a WIP and advancing slowly, due to my being involved with other
projects, but IMO it has much potential, especially in terms of
leveraging OpenCog's reasoning capabilities, if that's what you're into.

Nil
Michele Thiella

Mar 23, 2021, 7:01:36 AM
to opencog
- In response to Linas:
I've been excited for a long time!! Thanks for the introduction. I have read a lot about OpenCog during this period; the wiki is big and you have published a lot of papers.
 
"For trademark reasons, the open-source code should be called "Eva" not "Sophia", we decided that was the proper way to avoid trademark issues."
Now I understand why Eva exists.
 
"The code "used to work"; there are docker..."
I have seen the splitting of OpenCog into the various modules, and I'm currently working without docker. I would rather avoid a ready-to-use Eva; I had looked at the various files, and I plan to extract parts of the code and reorganize them for my project.

" There is little/no actual "AGI" in that code base."
Regarding AGI, I am aware of the distance that separates us from the goal. But in my simplistic view (don't kill me for this phrase), human intelligence is just excellent inference on an excellent knowledge base maintained by excellent learning. Excellence is a matter of progress.
The generality of the AtomSpace is incredible; if all the learning algorithms spoke Atomese and collaborated on the AtomSpace, it would be a good starting point IMO! If I'm not mistaken, interfacing with external libraries is one of your goals in OpenCog Hyperon.

This is to say that I think you are closer to AGI than it looks.


- In reply to Nil: (on Slack I'm named Raschild)
I had seen the beginning of the work, and it is very interesting. In the next few days I will look at its current state.
Two quick questions:
1) How complicated is it to work directly with ROS + Gazebo compared to Malmo and Gym?
2) Are Values already usable instead of OctoMap and the SpaceTime server?

In conclusion:
* My master's proposal has already been accepted, with the proviso that I come up with a feasible project. So: aim for a goal and try to achieve it. If it works, excellent; if not, show why it failed (I will try to avoid that).

* The direction of the project is still incomplete. Unfortunately, I can't figure out whether it will take me 1 day, 1 month or 1 year to understand/implement a given piece of code.
I started with the reasoning: I am currently learning the inference rules and how they work with the AtomSpace. I have seen part of the examples in ure and pln, and I was trying to understand the blocks-world problem developed by Anatoly Belikov here:
https://github.com/noskill/ure/tree/planning/examples/ure/planning

* Ideally, my goal was to extend the "model of the world" to work more with objects than people, and to extend the "self-model" to execute navigation and manipulation plans. In all of this, I haven't yet explored the learning.


Michele

Nil Geisweiller

Mar 23, 2021, 10:12:54 AM
to ope...@googlegroups.com, Michele Thiella
On 3/23/21 1:01 PM, Michele Thiella wrote:
> - In reply to Nil: (on Slack i'm named Raschild)
> I had seen the beginning of the work and it is very interesting. In the
> next few days I will look at the current state.
> Two quick questions:
> 1) How complicated is it to work directly with Ros + Gazebo compared to
> Malmo and Gym?

I don't know, as I've never used ROS or Gazebo. Working with Malmo and
Gym is fairly easy, but that's because they were designed that way,
and as a result they are fairly limited. For instance, the communication
protocol is completely synchronous; it's certainly not something you'd
want to use to control a robot.

> 2) Are Values already usable instead of OctoMap and SpaceTime server?

At the current stage, ROCCA uses neither Values, nor even OctoMap or the
SpaceTime server. Everything, including spacetime events, lives in the
atomspace, which is extremely inefficient, but that is not my concern for now.
My concern is to build an agent that makes decisions as rationally as
possible, in unknown and uncertain environments.

> In conclusion:
> * My master's proposal has already been accepted with the proviso to get
> a feasible project. So, aim for a goal and try to achieve it. If so,
> excellent; if not, show why it failed (I will try to avoid it).
>
> * The direction of the project is still incomplete. Unfortunately, i
> can't figure out if it takes me 1 day, 1 month or 1 year to
> understand/implement a certain code.
> I started with the reasoning: I am currently learning the inference
> rules and how they work with the atomspace, I have seen part of the
> examples in ure and pln and I was trying to understand the blocksworld
> problem developed by Anatoly Belikov here:
> https://github.com/noskill/ure/tree/planning/examples/ure/planning
>
> * Ideally my goal was to extend the "model of the world" to work more
> with objects than people and to extend the "self-model" to execute
> navigation and manipulation plans. In all of this, I haven't yet
> explored the learning.

Based on what you're saying, I think ROCCA would be a good fit. Bear in
mind, of course, that it is at a very early stage. I don't mind semi-mentoring
you, as long as you're somewhat autonomous (which you seem to be).

Do you want to have a call (say Friday, as it's ROCCA day for me)? I
could walk you through the code, to help you decide whether you want to
work on it, or else work on Eva.

Nil


Linas Vepstas

Mar 23, 2021, 1:10:24 PM
to opencog
Hi Michele,

On Tue, Mar 23, 2021 at 6:01 AM Michele Thiella <acikoa...@gmail.com> wrote:

" There is little/no actual "AGI" in that code base."
Regarding AGI, I am aware of the distance that separates us from the goal. But in my simplistic view (don't kill me for this phrase), human intelligence is just an excellent inference on an excellent knowledge base maintained by excellent learning. Excellence is a matter of progress.

Oh, please let me kill you! That's where all the fun is! Based on discussions with many people, there is a widespread misunderstanding of what AGI is or how it might be achieved. Although what you said is superficially, simplistically correct, I want to point out that "excellence" cannot be achieved by hand-crafting knowledge bases. Very few people seem to understand this; they seem to believe that somehow just slapping a bunch of parts together will result in AGI, that designing AGI is like designing an airplane, that it's just a matter of "excellent design" and it will fly by itself. This is not the case.

Thus, I was trying to be careful in distinguishing the "scaffolding", which is hand-crafted, from actual AGI type work. The scaffolding is needed to bring data into a format where an AGI type system can interface with it.  At every point of design, you have to ask: is this piece of code just some more hand-crafted (human-crafted) special-case code that is being used to convert the external world into a form that a computer algorithm can interact with? Or is this piece of code "AGI" (or as close to AGI as we can get right now)?  So I am trying to draw a contrast between "those things that are AGI" and "ancillary support services".

This is to say that I think you are closer to AGI than it looks.

Thank you. But my personal view is that the part of opencog that is closest to true AGI is the code and algos in opencog/learn and opencog/generate. However, from what I can tell, no one else shares my view; or at least, Ben doesn't, and he's the most important one to convince.


- In reply to Nil: (on Slack i'm named Raschild)

A bunch of people convinced me to hang out on discord, so that is where I am these days.  From what I can tell, no one working on opencog is using slack.

I had seen the beginning of the work and it is very interesting. In the next few days I will look at the current state.
Two quick questions:
1) How complicated is it to work directly with Ros + Gazebo compared to Malmo and Gym? 

I have only used ROS. The design is straightforward. If a ROS event comes in (some face is perceived; there is some loud noise, or some other environmental change), there is a python snippet (ROS is easiest to use with python) that converts that event into Atomese, and sends that Atomese to the cogserver (the cogserver is a network server, nothing more). So, for example, a loud sound might be converted to `(StateLink (PredicateNode "ambient sound") (ConceptNode "loud sound"))`. Then, on the opencog side, processing does whatever you've set it up to do with this kind of information. Exactly how sophisticated you want to be is up to you.
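As a rough illustration of how small such a shim can be, here is a sketch in plain Python (the event names are made up, and the actual network send to the cogserver is left out; this only shows the string-building half):

```python
# Minimal sketch of a ROS-event-to-Atomese shim: convert a perceived
# event into an Atomese s-expression string. Event names are hypothetical;
# a real shim would then write this string to the cogserver's socket.

def event_to_atomese(predicate: str, concept: str) -> str:
    """Render a simple StateLink as an Atomese s-expression."""
    return ('(StateLink (PredicateNode "%s") (ConceptNode "%s"))'
            % (predicate, concept))

# A loud noise comes in from ROS; convert it (and, in real code, send it):
sexpr = event_to_atomese("ambient sound", "loud sound")
print(sexpr)
# (StateLink (PredicateNode "ambient sound") (ConceptNode "loud sound"))
```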

For output, it's even easier: `(cog-evaluate! (EvaluationLink (GroundedPredicateNode "py:twiddle_ROS_message") (ListLink ... arguments ...)))` which calls a python function "twiddle_ROS_message" to send some data somewhere in ROS.
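The dispatch behind a `py:` name can be pictured as a simple name-to-function registry. A toy sketch (function and topic names are hypothetical; this is not the actual opencog python bindings):

```python
# Toy model of GroundedPredicateNode dispatch: a "py:" prefixed name
# maps to a python function, which gets called with the ListLink's
# arguments. Purely illustrative of the idea, not the real mechanism.

REGISTRY = {}

def grounded(name):
    """Register a python function under a 'py:' name."""
    def wrap(fn):
        REGISTRY["py:" + name] = fn
        return fn
    return wrap

@grounded("twiddle_ROS_message")
def twiddle_ROS_message(topic, payload):
    # Real code would publish to a ROS topic here.
    return f"sent {payload!r} to {topic}"

def cog_evaluate(gpn_name, *args):
    """Look up and call a grounded predicate, as cog-evaluate! would."""
    return REGISTRY[gpn_name](*args)

print(cog_evaluate("py:twiddle_ROS_message", "/head/cmd", "smile"))
# sent 'smile' to /head/cmd
```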

My remarks about "excellent design" and "AGI" above mean that the python wrappers for converting ROS data to Atomese should be minimal: they should do just enough to bring external information into the AtomSpace. You want to avoid a game of writing large, complex python scripts. So when you ask "How complicated is it to work directly with ROS + Gazebo compared to Malmo and Gym?", the answer should be "about the same" and "not complicated", because there should be only minimalistic shims to convert to/from Atomese and the message formats these other systems use. If you are creating something complicated in these systems, you are not doing AGI, you are doing robotics.


2) Are Values already usable instead of OctoMap and SpaceTime server?

For rocca, I have no clue. For the core AtomSpace, Values are a fully complete and functional component. They work.

The OctoMap and SpaceTime servers are kind-of broken, kind-of useless in their current form. I wanted these servers to implement prepositions: above, below, next to, behind, in front, left of, right of, bigger, smaller. This would allow a natural-language subsystem (or a reasoning subsystem) to work "naturally". However, this implementation was never done. Currently, all that octomap/spacetime do is store xyz 3D values. This is fairly useless, as there are dozens of external systems used in robotics that do 3D much better and faster: point clouds, SLAM (simultaneous localization and mapping), and so on.

Again: scaffolding vs AGI. So, 3D location is part of the external world, and the scaffolding must interface to the external world, and take 3D data and convert it into a format that the AGI code can operate on.  If you have AGI code that can work directly with 3D point clouds, then great! No scaffolding is needed! If you (like me) have proto-AGI code that wants to work with symbolic-natural-language, then some scaffolding is needed to convert point-clouds into prepositions.  Some day in the future, maybe we can remove some of the scaffolding.

However, up until now, almost all work that has been done, that is being done, is on scaffolding. If you are not careful, you will find yourself doing the same. This is not bad: it's educational, and it's important, and it helps show where the boundary is between the scaffolding and the AGI. -- if nothing else, this is called "learning at the school of hard knocks" -- "I built one and it didn't work, but I learned something". At the forefront of knowledge, that's the only school that is open. That's what science is.


I started with the reasoning: I am currently learning the inference rules and how they work with the atomspace, I have seen part of the examples in ure and pln and I was trying to understand the blocksworld problem developed by Anatoly Belikov here:

Reasoning and inference is a very dangerous place to start, and may kill your project before it even gets started. There are several reasons for this.

* Reasoning presumes that you have already decided on a representation for your data (either hand-crafted it, or automatically learned, somehow.) Once you have this representation, then you can reason on it. But do you have this representation? No, you don't. You might borrow one from blocks-world, or borrow the one from Eva, or borrow the one from rocca (or the one from agi-bio, which represents DNA, RNA and proteins).  You then have the problem of pulling external data and placing it into your representation, where "external data" is vision, sound, text, or RNA/DNA genetic sequences. This is scaffolding.

* Reasoning presumes that you have inference rules. Where did these come from? Did you hand-craft them? PLN has a bunch of hand-crafted inference rules that Ben and friends created 10-15 years ago, and that Nil has carefully implemented in code. They work, kind-of, whenever you have a hand-crafted representation for your data that is PLN-compatible. Nil has spent a lot of time, a huge amount of time (the last 10 years), getting the hand-crafted rules to fit with the hand-crafted representation, and getting reasoning to work efficiently and quickly. But if your representation does not fit the PLN structure, then it won't work. (None of my language work was ever able to fit with PLN. My new AGI work (at opencog/learn) will almost surely not fit with PLN; the goal there is to learn brand-new inference rules, instead of using the hand-crafted ones.)

* The actual implementation of the URE is "hard-core comp-sci", or maybe "good old-fashioned comp-sci": it's a set of algorithms for applying rewrite rules to a network. There are many non-opencog systems that do something similar, such as SAT solvers, constraint-satisfaction systems, ASP (answer-set programming), the "lambda cube", higher-order logic, theorem-proving systems, etc. It's hard core; it's not easy. Many of these systems are much, much faster, and are much more flexible, *if* your data representation is not PLN but something else: e.g. boolean expressions or prolog-like assertions. So we are back again to "what is your internal model?"
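To give a feel for what "applying rewrite rules to a network" means at its very simplest, here is a naive forward-chaining sketch in plain Python (the facts and rules are made up; the URE is far more sophisticated, with unification, rule selection, truth values, etc.):

```python
# Naive forward chainer: repeatedly apply if-then rules to a set of
# facts until nothing new can be derived. A toy version of what rule
# engines like the URE do (minus unification, rule choice, TVs, ...).

def forward_chain(facts, rules):
    """rules: list of (premises_tuple, conclusion) pairs."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Hypothetical robot-domain facts and rules:
rules = [
    (("door_closed", "has_gripper"), "can_open_door"),
    (("can_open_door",), "can_enter_room"),
]
derived = forward_chain({"door_closed", "has_gripper"}, rules)
print(sorted(derived))
# ['can_enter_room', 'can_open_door', 'door_closed', 'has_gripper']
```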

For example, in robotics, for a robot inside an office building, a common inference task is: "Is the door open? If the door is open, then roll through it; else grasp the door handle and open the door." The standard grad-school robotics approach to solving this is to use ROS or something similar to "see" the door, and then to use ASP (answer-set programming) to perform very fast crisp-logic reasoning and inference. It works. It's what 90% of all university robotics departments use. It is reasoning and inference. It's not AGI.
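The door example boils down to crisp if-then-else reasoning; a one-function sketch (the predicate and action names are hypothetical):

```python
# The door task as crisp logic: perception sets a boolean, the policy
# branches on it. Classic robotics-style reasoning, not AGI.

def door_policy(door_open: bool) -> list:
    """Return the action sequence for the doorway task."""
    if door_open:
        return ["roll_through"]
    return ["grasp_handle", "open_door", "roll_through"]

print(door_policy(True))   # ['roll_through']
print(door_policy(False))  # ['grasp_handle', 'open_door', 'roll_through']
```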


* Ideally my goal was to extend the "model of the world" to work more with objects than people and to extend the "self-model" to execute navigation and manipulation plans. In all of this, I haven't yet explored the learning.

For Eva, the self-model and world-model are all part of the same thing, and they were hand-crafted (not learned).  The goal was to interface language to movement and perception. The inspiration was to use concepts and ideas from Melcuk's "Meaning-Text Theory" (MTT) for the world-model.

Getting this to work involved a sequence of rickety and fragile transformations: from sound to text (via google voice-to-text), which is inaccurate; from text to a parse tree (via link-grammar); from parse tree to the internal model; from the internal model to robot motion/action. Changing anything anywhere was conceptually hard (no one else understood what the heck I was doing, including, among others, "the management" (Ben and David), and without management support, the going gets tough). Also, it was abstract enough and complex enough that other programmers were unwilling to learn how it worked, and so were unwilling to help. If you personally want to work on this, then be aware that it is abstract and complex. And fragile. (Part of the goal of "good engineering" is to compartmentalize the complexity so that it becomes "easy to use" and non-fragile. This code base needed a little more "good engineering" than it ever got.)

My goal with the opencog/learn project is to automate all of the above, including the reasoning, inference, and world-model, but it is far away from that, so far. I think I know how to do these things, but now I have to ... do them.

-- Linas

Adrian Borucki

Mar 23, 2021, 8:46:51 PM
to opencog
Just to throw in my 3 cents here: I have done some work on moving ROCCA to use MineRL's Gym API to access Minecraft, instead of using a separate Malmo wrapper.
It works in the sense that I can run the example code that way, and it simplifies the whole design somewhat, as interfacing with Minecraft is reduced to relying on the familiar structure of a Gym API.
I also have a setup where I can run the whole thing in a Docker container, so it's almost reproducible (there is a minor file edit needed).

An idea that I wrote about on Discord was to bring in some unsupervised image segmenter (like MONet), train it on the MineRL dataset (they have a dataset of Minecraft traces), and then use that to make some information about visual objects available to the agent. For now I am stuck a bit, though: sadly, the loading code they provided for their dataset has memory-leak issues, so I will have to write my own. It shouldn't prove too difficult, as I just need the video frames, but I haven't gotten around to implementing it yet.

Michele Thiella

Mar 24, 2021, 9:50:19 AM
to opencog
OK, a lot of things here!
I'm trying to learn more than 10 years of your work in a few months.

On Tuesday, March 23, 2021 at 18:10:24 UTC+1, linas wrote:
Oh, please let me kill you! That's where all the fun is!  Based on discussions with many people, there is a wide-spread misunderstanding of what AGI is or how it might be achieved. Although what you said is superficially, simplistically correct, I want to point out that "excellence" cannot be achieved by hand-crafting knowledge bases. Very few people seem to understand this, and seem to believe that somehow just slapping a bunch of parts together will result in AGI. That designing AGI is like designing an airplane, that it's just a matter of "excellent design" and it will fly by itself. This is not the case.

Thus, I was trying to be careful in distinguishing the "scaffolding", which is hand-crafted, from actual AGI type work. The scaffolding is needed to bring data into a format where an AGI type system can interface with it.  At every point of design, you have to ask: is this piece of code just some more hand-crafted (human-crafted) special-case code that is being used to convert the external world into a form that a computer algorithm can interact with? Or is this piece of code "AGI" (or as close to AGI as we can get right now)?  So I am trying to draw a contrast between "those things that are AGI" and "ancillary support services".

I'm starting to understand what you mean.
Probably all the code I'm thinking of is "scaffolding": code that "brings data into a format where an AGI type system can interface with it".
Maybe it's not clear to me what AGI code is. I had only seen /learn and /generate from the READMEs, maybe because I found them hard at that time (and at this time, for sure).
I agree that hand-crafted KBs and "excellence" don't mix. But then, your proposal would be to achieve "excellence" with a knowledge base built by what? I suppose the learn and generate repos are the answer. I'll take a better look at those repos!

 
I had seen the beginning of the work and it is very interesting. In the next few days I will look at the current state.
Two quick questions:
1) How complicated is it to work directly with Ros + Gazebo compared to Malmo and Gym? 

I have only used ROS. The design is straight-forward.  If a ROS event comes in (some face is perceived; there is some loud noise, other environmental change) there is a python snippet (ROS is easiest to use with python) that converts that event into Atomese, and sends that Atomese to the cogserver (the cogserver is a network server, nothing more). So for example, a loud sound might be converted to `(StateLink (PredicateNode "ambient sound") (ConceptNode "loud sound"))` Then, on the opencog side, processing does whatever you've set it up to do with this kind of information.  Exactly how sophisticated you want to be is up to you.

For output, it's even easier: `(cog-evaluate! (EvaluationLink (GroundedPredicateNode "py:twiddle_ROS_message") (ListLink ... arguments ...)))` which calls a python function "twiddle_ROS_message" to send some data somewhere in ROS.

My remarks about "excellent design" and "AGI" above means that python wrappers for converting ROS data to Atomese should be minimal, or that they should do just enough to bring in external information into the AtomSpace. You want to avoid a game of writing large, complex python scripts. So when you ask "How complicated is it to work directly with Ros + Gazebo compared to Malmo and Gym?" The answer should be "about the same" and "not complicated" because there should be only minimalistic shims to convert to/from Atomese and the message formats these other systems use.  If you are creating something complicated in these systems, you are not doing AGI, you are doing robotics.


I saw the Python wrappers from ROS to Atomese (I used ROS with C++ in a robotics course, and yes, Python is simpler) in the Eva folder, and they are really minimal. Better that way; I'll try to stay on the same wavelength. 
I have understood the interaction from ROS to Atomese. But there is one thing I don't completely understand: to activate a GroundedPredicateNode so that it executes a Python function, I can use its STI, right? It should be automatic; how does it work? 

Again: scaffolding vs AGI. So, 3D location is part of the external world, and the scaffolding must interface to the external world, and take 3D data and convert it into a format that the AGI code can operate on.  If you have AGI code that can work directly with 3D point clouds, then great! No scaffolding is needed! If you (like me) have proto-AGI code that wants to work with symbolic-natural-language, then some scaffolding is needed to convert point-clouds into prepositions.  Some day in the future, maybe we can remove some of the scaffolding.

However, up until now, almost all work that has been done, that is being done, is on scaffolding. If you are not careful, you will find yourself doing the same. This is not bad: it's educational, and it's important, and it helps show where the boundary is between the scaffolding and the AGI. -- if nothing else, this is called "learning at the school of hard knocks" -- "I built one and it didn't work, but I learned something". At the forefront of knowledge, that's the only school that is open. That's what science is.


Ideally, is there an AGI code idea that works directly with 3D point clouds? I also suppose that working with symbolic natural language, and thus propositions, is more efficient! Point clouds are heavy and it takes a lot of work to extract information from them, so why would we want this?
 
I'll pay attention to the boundary between scaffolding and AGI but I'll have to try it first-hand to really understand what we are talking about.


Reasoning and inference is a very dangerous place to start, and may kill your project before it even gets started. There are several reasons for this.

 I'm feeling it!

* Reasoning presumes that you have already decided on a representation for your data (either hand-crafted it, or automatically learned, somehow.) Once you have this representation, then you can reason on it. But do you have this representation? No, you don't. You might borrow one from blocks-world, or borrow the one from Eva, or borrow the one from rocca (or the one from agi-bio, which represents DNA, RNA and proteins).  You then have the problem of pulling external data and placing it into your representation, where "external data" is vision, sound, text, or RNA/DNA genetic sequences. This is scaffolding. 
* Reasoning presumes that you have inference rules. Where did these come from? Did you hand-craft them? PLN has a bunch of hand-crafted inference rules that Ben and friends hand-crafted 10-15 years ago, and Nil has carefully implemented in C code. They work, kind-of, whenever you have a hand-crafted representation for your data that is PLN-compatible. Nil spends a lot of time, a huge amount of time (the last 10 years) getting the hand-crafted rules to fit with the hand-crafted representation, and to get reasoning working efficiently and quickly. But if your representation does not fit the PLN structure, then it won't work.  (None of my language work was ever able to fit with PLN. My new AGI work (at opencog/learn) will almost surely not fit with PLN; the goal there is to learn brand-new inference rules, instead of using the hand-crafted ones.)

* The actual implementation of the URE is "hard-core comp-sci" or maybe "good old-fashioned comp sci": it's a set of algorithms to apply some rewrite rules to a network. There are many non-opencog systems that do something similar, such as SAT-solvers, constraint-satisfaction systems, ASP (answer-set programming), the "lambda cube", higher-order logic, theorem-proving systems, etc. It's hard core, it's not easy.  Many of these systems are much, much faster, and are much more flexible, *if* your data representation is not PLN, but is something else: e.g. boolean expressions or prolog-like assertions. So we are back again to "what is your internal model"?

For example, in robotics, for a robot inside an office building, a common inference task is "is the door open? If the door is open then roll through it, else grasp the door handle and open the door."  The standard grad-school robotics approach to solve this is to use ROS or something similar to "see" the door, and then to use ASP (answer-set programming) to perform very fast crisp-logic reasoning and inference. It works. It's what 90% of all university robotics departments use. It is reasoning and inference. It's not AGI.

I don't think the following is exactly a representation of the data, but...
I thought I was starting with a trivial representation: 
objects are described by (ConceptNode "English-object-name"), 
primitive robot actions by GroundedPredicateNodes, which call Python functions that actually perform those actions via ROS.
A vision algorithm recognizes certain objects and returns the English name and their 3D coordinates.
The robot receives goals to complete via English sentences with Relex2Logic. Once the inference rules are written, the robot tries to solve the goals. When it doesn't know what to do, it tries things randomly, builds up its KB from the sensors, and continues to make inferences.

The following is an example in a very Pseudo-language.
That's what my mind thinks when planning the resolution of this problem.
It certainly has many wrong ideas, concepts, ways of doing and dealing with things...
What are the critical errors that I've made?
What are the main differences from Eva?


Atomspace:
  Concepts: "name" - "3D pose"
  - bottle - Na
  - table - Na
  (Predicate: "over" List ("bottle") ("table"))
  Actions:
  - Go random
  - Go to coord
  - Grab obj

Goal: (bottle in hand)    // = grab bottle

Inference rules: all the necessary rules, i.e.
* grab-rule: preconditions: (robot-coord = obj-coord) ..., effects: (obj in hand) ...
* coord-rule: if x is in "coord1" and y is over x then y is in "coord1"

-> So, the robot tries backward chaining to find a behavior tree to run. It doesn't find one: it lacks knowledge; it doesn't know where the bottle is (let's leave out partial trees).
-> Go random ...
-> Vision sensor recognizes table
-> atomspace update: table in coord (1,1,1)
-> forward chaining -> bottle in coord (1,1,1)
-> backward chaining finds a tree, that is
Go to coord (1,1,1) + Grab obj
-> goal achieved
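The scenario above can be sketched as a toy forward-chaining loop. This is a purely illustrative stand-in (plain Python, not real URE or Atomese code); the fact tuples and the single coord-rule are invented for the example:

```python
# Toy sketch of the scenario: apply the coord-rule to a fixpoint.
# coord-rule: if x is at coordinate c, and y is over x, then y is at c.
# Facts are tuples; this is NOT how the AtomSpace/URE represents things.

facts = {("over", "bottle", "table")}

def forward_chain(facts):
    """Repeatedly apply the coord-rule until no new facts are derived."""
    changed = True
    while changed:
        changed = False
        new = set()
        for f in facts:
            if f[0] == "at":
                _, x, c = f
                for g in facts:
                    if g[0] == "over" and g[2] == x:
                        derived = ("at", g[1], c)
                        if derived not in facts:
                            new.add(derived)
        if new:
            facts |= new
            changed = True
    return facts

# Vision sensor recognizes the table at (1,1,1):
facts.add(("at", "table", (1, 1, 1)))
facts = forward_chain(facts)
print(("at", "bottle", (1, 1, 1)) in facts)  # the bottle's location is inferred
```

After the sensor update, the chainer derives the bottle's coordinates from the "over" relation, which is the missing knowledge the backward chainer needed.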
 

* Ideally my goal was to extend the "model of the world" to work more with objects than people and to extend the "self-model" to execute navigation and manipulation plans. In all of this, I haven't yet explored the learning.

For Eva, the self-model and world-model are all part of the same thing, and they were hand-crafted (not learned).  The goal was to interface language to movement and perception. The inspiration was to use concepts and ideas from Melcuk's "Meaning-Text Theory" (MTT) for the world-model.

Getting this to work involved a sequence of rickety and fragile transformations: from sound to text (via google voice-to-text) which is inaccurate. From text to a parse-tree (via link-grammar). From parse-tree to the internal model. From the internal model to robot motion/action. Changing anything anywhere was conceptually hard (no one else understood what the heck I was doing, including, among others, "the management" (Ben and David), and without management support, the going gets tough).  Also, it was abstract enough and complex enough that other programmers were unwilling to learn how it worked, and so were unwilling to help.  If you personally want to work on this, then be aware that it is abstract and complex. And fragile. (Part of the goal of "good engineering" is to compartmentalize the complexity so that it becomes "easy to use" and non-fragile. This code base needed a little bit more "good engineering" than it ever got.)

My goal with the opencog/learn project is to automate all of the above, including the reasoning, inference, and world-model, but it is far away from that, so far. I think I know how to do these things, but now I have to ... do them.

-- Linas

I haven't looked at Meaning-Text Theory yet (a serious omission, I think!). I'll fix that!
It seems to me that what I have described points in precisely this direction, but it was still only an idea. I can still change; I will have to talk to my supervisors to evaluate the new possibilities you have shown me!
In the meantime, thanks to everyone; my own knowledge base is improving a lot too!

Michele
 

Michele Thiella

Mar 24, 2021, 9:53:49 AM3/24/21
to opencog
Hi Nil,
first of all, thank you; it would be a pleasure to have you as a semi-mentor.
So far I've been working autonomously, and it seems to be working for me too.
ROCCA is of great interest to me. As I replied to Linas, I realized that my initial idea was a little different: more general and less innovative. Building scaffolding!
It may be one direction, but ROCCA may be a better one!
A call might be the best solution, as well as a great opportunity for me.
My spoken English is not the best, but this will be a way to improve that too.
Unfortunately, this week I'm busy with other university things; could it be one of the next Fridays?
Meanwhile, I'll try to get a clearer idea of all this news.

Thanks so much again

Michele

Michele Thiella

Mar 24, 2021, 10:37:25 AM3/24/21
to opencog
On Wednesday, March 24, 2021 at 01:46:51 UTC+1 gent...@gmail.com wrote:
Just to throw in my 3 cents here, I have done some work on moving ROCCA to use the MineRL’s Gym API to access Minecraft, instead of using a separate Malmo wrapper.
It works in the sense that I can run the example code in that way and it simplifies the whole design somewhat, as interfacing with Minecraft is reduced to relying on a familiar structure of a Gym API.
I also have a setup where I can run the whole thing in a Docker container, so it’s almost reproducible (there is a minor file edit needed).

An idea that I wrote about on Discord was to bring in some unsupervised image segmenter (like MONet), train it on the MineRL dataset (they have a dataset of Minecraft traces), and then use that to make some information about visual objects available to the agent. For now I am stuck a bit, though, as sadly the loading code they provided for their dataset has memory-leak issues, so I will have to write my own. It shouldn't prove too difficult, as I just need the video frames, but I have yet to get around to implementing it.

Hi Adrian, pleased to meet you.

I understand that I'll switch to Discord!

I don't know MONet yet, I'll take a look.
This scope appears, at least initially, to be based on vision. For my little experience I hated the vision, even if it is essential. Maybe the most interesting thing is the inference. It's a new science to me but it's damn challenging and interesting. But it was a great idea to use Docker containers, they make sharing a lot easier. I got a Minecraft account to test some code but in the end I had given up.
I will keep updated on Discord!
 
Michele

Linas Vepstas

Mar 24, 2021, 3:11:49 PM3/24/21
to opencog
Hi Michele,

On Wed, Mar 24, 2021 at 8:50 AM Michele Thiella <acikoa...@gmail.com> wrote:
OK, lots of things here! 
I'm trying to learn more than 10 years of your work in a few months. 

You are avoiding 10 years of confusion and mistakes. Anyway, later in life, you may find time to stop and smell the flowers. But the standard academic trajectory is to push you as fast as possible to the very edge of what is known, and do research there.


On Tuesday, March 23, 2021 at 18:10:24 UTC+1 linas wrote:
Oh, please let me kill you! That's where all the fun is!  Based on discussions with many people, there is a wide-spread misunderstanding of what AGI is or how it might be achieved. Although what you said is superficially, simplistically correct, I want to point out that "excellence" cannot be achieved by hand-crafting knowledge bases. Very few people seem to understand this, and seem to believe that somehow just slapping a bunch of parts together will result in AGI. That designing AGI is like designing an airplane, that it's just a matter of "excellent design" and it will fly by itself. This is not the case.

Thus, I was trying to be careful in distinguishing the "scaffolding", which is hand-crafted, from actual AGI type work. The scaffolding is needed to bring data into a format where an AGI type system can interface with it.  At every point of design, you have to ask: is this piece of code just some more hand-crafted (human-crafted) special-case code that is being used to convert the external world into a form that a computer algorithm can interact with? Or is this piece of code "AGI" (or as close to AGI as we can get right now)?  So I am trying to draw a contrast between "those things that are AGI" and "ancillary support services".

I'm starting to understand what you mean. 
Probably all the code I'm thinking of is "scaffolding". So, "bring data into a format where an AGI type system can interface with it". 
Maybe it's not clear to me what AGI code is. I had seen /learn and /generate only from their READMEs, maybe because I found them hard at that time (and at this time, for sure). 
I agree that hand-crafting a KB and "excellence" don't mix. But then, your proposal would be to achieve "excellence" with a knowledge base built by what?

The distinction between AGI and scaffolding is not clear. But I can illustrate with an example.

Link Grammar is a natural-language parser, for English and other languages. It consists of two parts: the parser itself, which embodies a theory of natural language, and a lexis, or dictionary, that encodes the actual grammar for different languages. (There is one for English, one for Russian, and another 8+ demo dictionaries.)  It was created in the 1990's, and was "cutting edge" back then -- it was at the forefront of computational linguistics research.

There are other parsers as well, created in a similar timeframe, having a similar division: a generic algorithm, and a hand-crafted dictionary encoding a specific language.  For this discussion, the dictionary can be called a "knowledge base"

One goal of "true AGI" would be to automatically learn that dictionary. There have been attempts to do this in the 1990's, and probably earlier, and they continue on to this day, with varying levels of activity and theories. And obviously, most of the neural-net crowd has given up on symbolic AI, although they are trying to learn a collection of weight-vectors. So, although the neural-net people don't have/use a parser, they do have a "knowledge base". (Unfortunately, it's a black box -- it's a collection of floating-point numbers, with no idea of what they mean.)

So, naively, simplistically, a "true AGI system" should be capable of learning the "knowledge base", instead of relying on humans to craft one.

Now comes the blurry part: the learning algorithm itself is hand-crafted, so isn't that a form of cheating? We are once again relying on humans to do the work. For example, for neural nets, effectively all of them are trained on a selection of images curated by human beings. The neural net learns how to recognize a photo of a horse, but it was trained on a human-curated training set. So, again, that's "cheating". Have you heard the expression "it's turtles all the way down"? Well, for neural nets, it's hand-crafted datasets all the way down. It's people all the way down. The goal of building a true AGI is to avoid this.  Step 1 is to avoid hand-crafted training sets. Step 2 is to avoid hand-crafted algorithms.  I'm working on Step 1. I suppose that Step 2 is beyond the abilities of what can be done today. It's a bit blurry.
 
I had seen the beginning of the work and it is very interesting. In the next few days I will look at the current state.
Two quick questions:
1) How complicated is it to work directly with ROS + Gazebo compared to Malmo and Gym? 

I have only used ROS. The design is straight-forward.  If a ROS event comes in (some face is perceived; there is some loud noise, other environmental change) there is a python snippet (ROS is easiest to use with python) that converts that event into Atomese, and sends that Atomese to the cogserver (the cogserver is a network server, nothing more). So for example, a loud sound might be converted to `(StateLink (PredicateNode "ambient sound") (ConceptNode "loud sound"))` Then, on the opencog side, processing does whatever you've set it up to do with this kind of information.  Exactly how sophisticated you want to be is up to you.

For output, it's even easier: `(cog-evaluate! (EvaluationLink (GroundedPredicateNode "py:twiddle_ROS_message") (ListLink ... arguments ...)))` which calls a python function "twiddle_ROS_message" to send some data somewhere in ROS.
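As a purely illustrative sketch of such a minimal shim (no real ROS or cogserver connection here; the event names and the helper functions are invented for the example, and a real shim would receive events via rospy callbacks and send the s-expression to the cogserver over a network socket):

```python
# Hypothetical sketch of an event-to-Atomese shim. In real code, the
# trigger would be a rospy subscriber callback, and the resulting
# s-expression would be written to the cogserver's network socket.

def event_to_atomese(predicate: str, value: str) -> str:
    """Convert a perceived event into an Atomese StateLink s-expression."""
    return ('(StateLink (PredicateNode "{}") (ConceptNode "{}"))'
            .format(predicate, value))

def on_loud_noise() -> str:
    # Stand-in for a ROS event callback: just builds the Atomese string.
    return event_to_atomese("ambient sound", "loud sound")

print(on_loud_noise())
# -> (StateLink (PredicateNode "ambient sound") (ConceptNode "loud sound"))
```

The point of keeping the shim this thin is exactly the one made below: the Python side only translates formats; all interesting processing happens on the Atomese side.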

My remarks about "excellent design" and "AGI" above means that python wrappers for converting ROS data to Atomese should be minimal, or that they should do just enough to bring in external information into the AtomSpace. You want to avoid a game of writing large, complex python scripts. So when you ask "How complicated is it to work directly with Ros + Gazebo compared to Malmo and Gym?" The answer should be "about the same" and "not complicated" because there should be only minimalistic shims to convert to/from Atomese and the message formats these other systems use.  If you are creating something complicated in these systems, you are not doing AGI, you are doing robotics.


I saw the Python wrappers from ROS to Atomese (I used ROS with C++ in a robotics course, and yes, Python is simpler) in the Eva folder, and they are really minimal.

Well, also, someone reorganized the github repos, and most of that code was moved somewhere else ... and then, after it got to its new home, it was cut down ... this is one reason why things may not work. Some parts might have been lost in the move.


But there is one thing I don't completely understand: to activate a GroundedPredicateNode so that it executes a Python function, I can use its STI, right? It should be automatic; how does it work? 

It is not automatic.  You have to `cog-execute!` whatever code you want to trigger. There are several ways of doing this.
1) by hand .. obviously.
2) write some scheme or some python code that loops over whatever needs to be looped over, searching for high or low STI or any other Value or StateLink or whatever might be changing, and call cog-execute! as needed.
3) Do the above, but entirely in Atomese.  There is a way to do an infinite loop in atomese -- it is actually a tail-recursive call to the function itself. It should be possible to do everything you need to do in "pure atomese". Now, atomese was never meant to be a full-scale programming language, like python or scheme, so it is missing many commonplace ideas that make python/scheme/c++/etc. "human friendly". But it does have enough to make most things possible, and many things "easy" (-ish)
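Option 2 above can be sketched in a few lines of plain Python. Everything here is a stand-in (the atom dicts, the STI threshold, and the `execute` callback that takes the place of cog-execute!); real code would use the opencog Python bindings instead:

```python
# Illustrative sketch of option 2: poll atoms for a high STI value and
# trigger the associated action. The atom representation (plain dicts)
# and the execute() callback are hypothetical stand-ins for real
# AtomSpace atoms and cog-execute!.

STI_THRESHOLD = 0.5

def poll_and_trigger(atoms, execute):
    """Scan atoms; call `execute` on each atom whose STI exceeds the
    threshold. Returns the names of the atoms triggered."""
    fired = []
    for atom in atoms:
        if atom.get("sti", 0.0) > STI_THRESHOLD:
            execute(atom["name"])
            fired.append(atom["name"])
    return fired

log = []
atoms = [{"name": "greet-person", "sti": 0.9},
         {"name": "idle-blink",   "sti": 0.1}]
print(poll_and_trigger(atoms, log.append))  # only the high-STI atom fires
```

In a long-running agent, this scan would sit inside a loop (or, per option 3, be expressed as a tail-recursive loop in Atomese itself).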

The Eva/Sophia code did version 3. I forget where the main loop is; it's only 3 lines of code total, so it's easy to miss. It might be in one of the repos that was moved around. Everything else was controlled by SequentialAndLinks, which stepped through a tree of decisions, triggering a GroundedPredicate whenever some condition was met.

There were three design goals:
a) Make sure atomese had everything it needed to control a robot
b) Make sure that the atomese was simple enough that other algorithms could analyze it and modify it. For example, it should be possible (in principle) for MOSES or URE or PLN or some other system to analyze and modify the robot-control code. (in practice, this was never done) 

Keeping the robot code in the form of a decision tree should mean that it is simple enough that other systems could analyze that tree, edit that tree, modify it, extend it, and thus create brand-new robot behaviors out of "thin air".

c) Make sure that the design of atomese itself was simple enough and usable enough to allow a) and b) above. This is an ongoing project.
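As a toy analogue of the SequentialAndLink idea described above (conditions checked in order, stopping at the first one that fails, with an action fired at each step that succeeds) — the condition and action names here are invented, and the real Eva code expressed this tree in Atomese, not Python:

```python
# Toy analogue of a SequentialAndLink-style decision tree: evaluate
# (condition, action) pairs in order; stop at the first condition that
# fails. Each action stands in for triggering a GroundedPredicate.

def sequential_and(steps, context):
    """Run (condition, action) pairs in order; stop on the first failing
    condition. Returns the list of actions taken."""
    taken = []
    for condition, action in steps:
        if not condition(context):
            break
        taken.append(action)
    return taken

tree = [
    (lambda ctx: ctx["face_visible"], "make-eye-contact"),
    (lambda ctx: ctx["loud_sound"],   "turn-toward-sound"),
]
print(sequential_and(tree, {"face_visible": True, "loud_sound": False}))
# -> ['make-eye-contact']
```

Because the whole behavior is just a data structure (a list of steps), another program can inspect, edit, or extend it, which is exactly design goal b).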


Again: scaffolding vs AGI. So, 3D location is part of the external world, and the scaffolding must interface to the external world, and take 3D data and convert it into a format that the AGI code can operate on.  If you have AGI code that can work directly with 3D point clouds, then great! No scaffolding is needed! If you (like me) have proto-AGI code that wants to work with symbolic-natural-language, then some scaffolding is needed to convert point-clouds into prepositions.  Some day in the future, maybe we can remove some of the scaffolding.

However, up until now, almost all work that has been done, that is being done, is on scaffolding. If you are not careful, you will find yourself doing the same. This is not bad: it's educational, and it's important, and it helps show where the boundary is between the scaffolding and the AGI. -- if nothing else, this is called "learning at the school of hard knocks" -- "I built one and it didn't work, but I learned something". At the forefront of knowledge, that's the only school that is open. That's what science is.


Ideally, is there an AGI code idea that works directly with 3D point clouds?

Ideally, there is AGI code that can see and listen, can sense true magnetic North, swim in the ocean, etc. so sure, of course.

I also suppose that working with symbolic natural language, and thus propositions, is more efficient! Point clouds are heavy and it takes a lot of work to extract information from them,

Yes.
so why would we want this?

Because, eventually, it needs to be something that happens. And just because I don't know how to do this today does not mean that someone clever won't be able to figure out how to process a point-cloud and discern shapes in it. I suppose someone at Microsoft or at Tesla is already doing something along those lines.

 
Reasoning and inference is a very dangerous place to start, and may kill your project before it even gets started. There are several reasons for this.

 I'm feeling it!

* Reasoning presumes that you have already decided on a representation for your data (either hand-crafted it, or automatically learned, somehow.) Once you have this representation, then you can reason on it. But do you have this representation? No, you don't. You might borrow one from blocks-world, or borrow the one from Eva, or borrow the one from rocca (or the one from agi-bio, which represents DNA, RNA and proteins).  You then have the problem of pulling external data and placing it into your representation, where "external data" is vision, sound, text, or RNA/DNA genetic sequences. This is scaffolding. 
* Reasoning presumes that you have inference rules. Where did these come from? Did you hand-craft them? PLN has a bunch of hand-crafted inference rules that Ben and friends hand-crafted 10-15 years ago, and Nil has carefully implemented in C code. They work, kind-of, whenever you have a hand-crafted representation for your data that is PLN-compatible. Nil spends a lot of time, a huge amount of time (the last 10 years) getting the hand-crafted rules to fit with the hand-crafted representation, and to get reasoning working efficiently and quickly. But if your representation does not fit the PLN structure, then it won't work.  (None of my language work was ever able to fit with PLN. My new AGI work (at opencog/learn) will almost surely not fit with PLN; the goal there is to learn brand-new inference rules, instead of using the hand-crafted ones.)

* The actual implementation of the URE is "hard-core comp-sci" or maybe "good old-fashioned comp sci": it's a set of algorithms to apply some rewrite rules to a network. There are many non-opencog systems that do something similar, such as SAT-solvers, constraint-satisfaction systems, ASP (answer-set programming), the "lambda cube", higher-order logic, theorem-proving systems, etc. It's hard core, it's not easy.  Many of these systems are much, much faster, and are much more flexible, *if* your data representation is not PLN, but is something else: e.g. boolean expressions or prolog-like assertions. So we are back again to "what is your internal model"?

For example, in robotics, for a robot inside an office building, a common inference task is "is the door open? If the door is open then roll through it, else grasp the door handle and open the door."  The standard grad-school robotics approach to solve this is to use ROS or something similar to "see" the door, and then to use ASP (answer-set programming) to perform very fast crisp-logic reasoning and inference. It works. It's what 90% of all university robotics departments use. It is reasoning and inference. It's not AGI.
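A toy rendering of that crisp-logic door decision, in plain Python rather than actual ASP (a real system would encode the rules in an ASP solver such as clingo; the action names here are made up):

```python
# Toy crisp-logic version of the door task. Real grad-school robotics
# would express this as ASP rules and let the solver do the inference;
# this sketch just shows the crisp decision being made.

def plan(door_open: bool):
    """If the door is open, roll through; else grasp the handle,
    open the door, then roll through."""
    if door_open:
        return ["roll-through"]
    return ["grasp-handle", "open-door", "roll-through"]

print(plan(False))
# -> ['grasp-handle', 'open-door', 'roll-through']
```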

I don't think the following is exactly a representation of the data, but...
I thought I was starting with a trivial representation: 
objects are described by (ConceptNode "English-object-name"), 
primitive robot actions by GroundedPredicateNodes, which call Python functions that actually perform those actions via ROS.
A vision algorithm recognizes certain objects and returns the English name and their 3D coordinates.
The robot receives goals to complete via English sentences with Relex2Logic. Once the inference rules are written, the robot tries to solve the goals. When it doesn't know what to do, it tries things randomly, builds up its KB from the sensors, and continues to make inferences.

The following is an example in a very Pseudo-language.
That's what my mind thinks when planning the resolution of this problem.
It certainly has many wrong ideas, concepts, ways of doing and dealing with things...

Yes, more or less.

What are the critical errors that I've made?
What are the main differences from Eva?

I did not use relex2logic. That was designed for something else.

Before reasoning is possible, one must have a world-model. This model has several parts to it:
* The people in the room, and their 3D coordinates
* The objects on the table and their 3D coordinates.
* The self-model (current position of robot, and of its arms, etc.)
The above is updated rapidly, by sensor information.

Then there is some long-term knowledge:
* The names of everyone who is known. A dictionary linking names to faces.

Then there is some common-sense knowledge:
* you can talk to people,
* you can pick up bottles on a table
* you cannot talk to bottles
* you cannot pick up people.
* bottles can be picked up with the arm.
* facial expressions and arm movements can be used to communicate with people.

The world model needs to represent all of this. It also needs to store all of the above in a representation that is accessible to natural language, so that it can talk about the position of its arm, the location of the bottle, and the name of the person it is talking to.

Reasoning is possible only *after* all of the above has been satisfied, not before.  Attempts to do reasoning before the above has been built will always come up short, because some important piece of information will be missing, or will be stored somewhere, in some format that the reasoning system does not have access to it.

The point here is that people have been building "reasoning systems" for the last 30 or 40 years. They are always frail and fragile. They are always missing key information.  I think it is important to try to understand how to represent information in a uniform manner, so that reasoning does not stumble.



Atomspace:
  Concepts: "name" - "3D pose"
  - bottle - Na
  - table - Na
  (Predicate: "over" List ("bottle") ("table"))
  Actions:
  - Go random
  - Go to coord
  - Grab obj

Goal: (bottle in hand)    // = grab bottle

Inference rules: all the necessary rules, i.e.
* grab-rule: preconditions: (robot-coord = obj-coord) ..., effects: (obj in hand) ...
* coord-rule: if x is in "coord1" and y is over x then y is in "coord1"

-> So, the robot tries backward chaining to find a behavior tree to run. It doesn't find one: it lacks knowledge; it doesn't know where the bottle is (let's leave out partial trees).
-> Go random ...
-> Vision sensor recognizes table
-> atomspace update: table in coord (1,1,1)
-> forward chaining -> bottle in coord (1,1,1)
-> backward chaining finds a tree, that is
Go to coord (1,1,1) + Grab obj
-> goal achieved

This is a more-or-less textbook robotics homework assignment. It has certainly been solved in many different ways by many different people using many different technologies, over the last 40-60 years. Algorithms like A-star search are one of the research results of trying to solve the above. The AtomSpace would be a horrible technology to solve the above problem; it's too slow, too bulky, too complicated.

The chaining steps can be called "inference", but it is inference devoid of natural language, devoid of "true understanding". My goal is to have a conversation with the robot:

"What do you see?"
"A bottle"
"where is it?"
"on the table"
"can you reach it?"
"no"
"could you reach it if you move to a different place?"
"yes"
"where would you move?"
"closer to the bottle"
"can you please move closer to the bottle?"
(robot moves)

This can be solved by carefully hand-crafting a chatbot dialog tree. (The ghost chatbot system in opencog was designed to allow such dialog trees to be created) Over the decades, many chatbots have been written. Again: there are common problems:

-- the text is hard-coded, and not linguistic.  Minor changes in wording cause the chatbot to get confused.
-- there is no world-model, or it is ad hoc and scattered over many places
-- no ability to perform reasoning
-- no memory of the dialog ("what were we talking about?" - well, chatbots do have a one-word "topic" variable, so the chatbot can answer "we are talking about baseball", but that's it. There is no "world model" of the conversation, and no "world model" of who the conversation was with ("On Sunday, I talked to John about a bottle on a table and how to grasp it")

Note that ghost has all of the above problems. It's not linguistic, it has no world-model, it has no defined representation that can be reasoned over, and it has no memory.

20 years ago, it was hard to build a robot that could grasp a bottle. It was hard to create a good chatbot.

What is the state of the art, today? Well, Tesla has self-driving cars, and Amazon and Apple have chatbots that are very sophisticated.  There is no open source for any of this, and there are no open standards, so if you are a university grad student (or a university professor) it is still very very hard to build a robot that can grasp a bottle, or a robot that you can talk to.  And yet, these basic tasks have become "engineering"; they are no longer "science".  The science resides at a more abstract level.

--linas

 

* Ideally my goal was to extend the "model of the world" to work more with objects than people and to extend the "self-model" to execute navigation and manipulation plans. In all of this, I haven't yet explored the learning.

For Eva, the self-model and world-model are all part of the same thing, and they were hand-crafted (not learned).  The goal was to interface language to movement and perception. The inspiration was to use concepts and ideas from Mel'čuk's "Meaning-Text Theory" (MTT) for the world-model.

Getting this to work involved a sequence of rickety and fragile transformations: from sound to text (via Google voice-to-text), which is inaccurate; from text to a parse tree (via Link Grammar); from parse tree to the internal model; from the internal model to robot motion/action. Changing anything anywhere was conceptually hard: no one else understood what the heck I was doing, including, among others, "the management" (Ben and David), and without management support, the going gets tough.  Also, it was abstract enough and complex enough that other programmers were unwilling to learn how it worked, and so were unwilling to help.  If you personally want to work on this, then be aware that it is abstract and complex. And fragile. (Part of the goal of "good engineering" is to compartmentalize the complexity so that it becomes "easy to use" and non-fragile. This code base needed a bit more "good engineering" than it ever got.)

My goal with the opencog/learn project is to automate all of the above, including the reasoning, inference, and world-model, but it is still far from that. I think I know how to do these things, but now I have to ... do them.

-- Linas

I haven't looked at Meaning-Text Theory yet (a serious omission, I think!). I'll fix that!
What I have described seems to me to go in precisely this direction, but it was still only an idea. I can still change course; I will have to speak with my supervisors to evaluate the new possibilities you have shown me!
In the meantime, thanks to everyone; my knowledge base is improving a lot too!

Michele
 

--
You received this message because you are subscribed to the Google Groups "opencog" group.
To unsubscribe from this group and stop receiving emails from it, send an email to opencog+u...@googlegroups.com.

Michele Thiella

unread,
Mar 25, 2021, 3:03:13 PM3/25/21
to opencog
Hi Linas, 
I hope I'm not disturbing and wasting too much of your time!

On Wednesday, March 24, 2021 at 20:11:49 UTC+1, linas wrote:
So, naively, simplistically, a "true AGI system" should be capable of learning the "knowledge base", instead of relying on humans to craft one.

Now comes the blurry part: the learning algorithm itself is hand-crafted, so isn't that a form of cheating? We are once again relying on humans to do the work. For example, for neural nets, effectively all of them are trained on a selection of images curated by human beings. The neural net learns how to recognize a photo of a horse, but it was trained on a human-curated training set. So, again, that's "cheating". Have you heard the expression "it's turtles all the way down"? Well, for neural nets, it's hand-crafted datasets all the way down. It's people all the way down. The goal of building a true AGI is to avoid this.  Step 1 is to avoid hand-crafted training sets. Step 2 is to avoid hand-crafted algorithms.  I'm working on Step 1. I suppose that Step 2 is beyond the abilities of what can be done today. It's a bit blurry.

I begin to struggle not to say nonsense.

Thus, if I understand your point 1 correctly, considering the README of the learn repo, the initial idea is unsupervised learning for natural language, with the next goal of extending the domain from text to "all things in the world".
So (still using unsupervised learning?) let the robot build its representation of the world through its observations.
Now, the input set is no longer hand-crafted, but the algorithm still has the "turtles problem". Your point 2 would solve that too, right?
The only option I can think of is that the AGI itself writes its algorithms... but recursively, the AGI would have to invent itself...

But then, without "true-AGI" learning, I'll never have a "true-AGI" knowledge base, and without that I won't be able to continue, right?
Why work on point 1 if point 2 is a prerequisite?
It seems like a no-win situation. Maybe I'm just a pessimist!
There will be another way... In the end, our knowledge base was also helped by our parents in some way.


But there is a thing that I don't completely understand: to activate a GroundedPredicateNode to execute a py function, I can use its STI, right? It should be automatic -- how does it work?

It is not automatic.  You have to `cog-execute!` whatever code you want to trigger. There are several ways of doing this.
1) by hand .. obviously.
2) write some scheme or some python code that loops over whatever needs to be looped over, searching for high or low STI or any other Value or StateLink or whatever might be changing, and call cog-execute! as needed.
3) Do the above, but entirely in Atomese.  There is a way to do an infinite loop in atomese -- it is actually a tail-recursive call to the function itself. It should be possible to do everything you need to do in "pure atomese". Now, atomese was never meant to be a full-scale programming language, like python or scheme, so it is missing many commonplace ideas that make python/scheme/c++/etc. "human friendly". But it does have enough to make most things possible, and many things "easy" (-ish)
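For the archives, here is a minimal sketch of option 3: an infinite loop as a tail-recursive DefinedPredicate. This is a hand-written illustration, not the actual Eva code; the names "main loop" and "do one step" are made up:

```scheme
(use-modules (opencog) (opencog exec))

; Hypothetical body of one loop iteration -- in a real application,
; replace this with the condition checks and cog-execute! calls needed.
(DefineLink
   (DefinedPredicate "do one step")
   (True))   ; placeholder: always succeeds

; The loop itself: run one step, then tail-call ourselves, forever.
(DefineLink
   (DefinedPredicate "main loop")
   (SatisfactionLink
      (SequentialAnd
         (DefinedPredicate "do one step")
         (DefinedPredicate "main loop"))))

; Kick it off (in its own thread, so the REPL stays responsive):
; (call-with-new-thread
;    (lambda () (cog-evaluate! (DefinedPredicate "main loop"))))
```

The tail call works because SequentialAnd evaluates its members in order, so the last member simply re-enters the same predicate; no stack is consumed.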

The Eva/Sophia code did version 3. I forget where the main loop is; it's only 3 lines of code total, so it's easy to miss. It might be in one of the repos that was moved around. Everything else was controlled by SequentialAndLinks, which stepped through a tree of decisions, triggering a GroundedPredicate whenever some condition was met.
There were three design goals:
a) Make sure Atomese had everything it needed to control a robot.
b) Make sure that the Atomese was simple enough that other algorithms could analyze it and modify it. For example, it should be possible (in principle) for MOSES or URE or PLN or some other system to analyze and modify the robot-control code. (In practice, this was never done.)

Keeping the robot code in the form of a decision tree should mean that it is simple enough that other systems could analyze that tree, edit that tree, modify it, extend it, and thus create brand-new robot behaviors out of "thin air".

c) Make sure that the design of Atomese itself was simple enough and usable enough to allow a) and b) above. This is an ongoing project.
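A tiny illustrative fragment of what one branch of such a decision tree looks like. This is a sketch, not actual Eva code, and both python function names are hypothetical:

```scheme
(use-modules (opencog) (opencog exec))

; If a face is visible, greet it.  SequentialAnd stops at the first
; predicate that evaluates to false, so the "wave" action only runs
; when the "face visible" check succeeds.
(DefineLink
   (DefinedPredicate "greet a visible face")
   (SequentialAnd
      (Evaluation
         (GroundedPredicate "py: face_visible")  ; hypothetical perception check
         (List))
      (Evaluation
         (GroundedPredicate "py: wave_hand")     ; hypothetical action
         (List))))

; Triggered by hand, or from an enclosing main loop:
; (cog-evaluate! (DefinedPredicate "greet a visible face"))
```

Because the whole behavior is just a tree of Atoms in the AtomSpace, another algorithm could, in principle, pattern-match on it and rewrite it -- which is design goal b) above.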


Ah, ok, now it makes a lot more sense! I really like solution 3).
I'm experimenting a little with the potential of Atomese (sometimes at random), but it's nice to write; I'm also learning scheme, which I knew nothing about.

Can I ask you to say something about the tree of decisions in Eva? Was it a separate scheme/python module that analyzed the SequentialAnd?
While I'm at it, I can't place some components in your architecture:
I read Moshe Looks's thesis on MOSES and what I found on OpenPsi. But in practice, what were they used for?
Finally, in practice, what does PLN do/have beyond the URE?
This is now clear to me, but why natural language?
If I didn't want interactions with humans, could I do it differently?
A certain variation of the sensor values already represents "the forward movement"; I don't need to associate a name with it if I don't speak,
and for the Atom "bottle" I could use its ID instead.
I don't understand why removing natural language implies having an inference devoid of "true understanding".

Stupid example: if I speak Italian with a Frenchman, neither of us understands the other. But a bottle remains a bottle for both, and if I offer him my hand he will probably shake it... or he will leave without saying goodbye.

I'm probably missing something big, but until I bang my head against it, I don't see it.



I find the abstract level incredible, both in terms of beauty and difficulty!

Michele
 

Nil Geisweiller

unread,
Mar 26, 2021, 3:26:36 AM3/26/21
to ope...@googlegroups.com, Adrian Borucki
On 3/24/21 2:46 AM, Adrian Borucki wrote:
> Just to throw in my 3 cents here, I have done some work on moving ROCCA
> to use the MineRL’s Gym API to access Minecraft, instead of using a
> separate Malmo wrapper.
> It works in the sense that I can run the example code in that way and it
> simplifies the whole design somewhat, as interfacing with Minecraft is
> reduced to relying on a familiar structure of a Gym API.
> I also have a setup where I can run the whole thing in a Docker
> container, so it’s almost reproducible (there is a minor file edit needed).

That's very cool, Adrian. BTW, for anything that doesn't break the existing
examples and doesn't fundamentally change the design for the worse, you
should feel free to create a PR.

> An idea that I wrote about on Discord was to bring in some unsupervised
> image segmenter (like MONet), train it on the MineRL dataset (they
> have a dataset of Minecraft traces) and then use that to have some
> information about visual objects available to the agent. For now I am
> stuck a bit though, as sadly the loading code they provided for their
> dataset has memory-leak issues, so I will have to write my own. It
> shouldn't prove too difficult, as I just need the video frames, but I
> have yet to get around to implementing it.

:+1:

Nil

P.S: BTW, I'm on discord now, my id is ngeiswei.

> Do you want to have a call (say Friday, as it's ROCCA day for me)? I
> could walk you through the code, to help you decide whether you want to
> work on it, or else work on Eva.
>
> Nil
>
> >
> >
> > Michele
> >
> > On Tuesday, March 23, 2021 at 06:19:25 UTC+1, Nil wrote:
> >
> > Forwarding to opencog as I forgot to reply-all.
> >
> > -------- Forwarded Message --------
> > Subject: Re: New user [was Re: [opencog-dev] Problem in atom
> deletion
> > from postgreSQL
> > Date: Tue, 23 Mar 2021 07:15:10 +0200
> > From: Nil Geisweiller <ngei...@gmail.com>
> > To: Michele Thiella <acikoa...@gmail.com>
> >
> > Hi Michele,
> >
> > I'm working on something that might be relevant to your work, see
> >
> > https://github.com/opencog/rocca
> > https://github.com/singnet/rocca
Nil Geisweiller

unread,
Mar 26, 2021, 3:36:23 AM3/26/21
to ope...@googlegroups.com, Michele Thiella
On 3/24/21 3:53 PM, Michele Thiella wrote:
> My spoken English is not the best, but it will be a way to improve that too.

No problem, I'll adjust, by switching to a terrible French accent. :-)

> Unfortunately, this week I'm busy with other university things, could it
> be one of the next Fridays?

Sure, Fri 2 April then, I'm available from 8am to 3pm EET, then from 5pm
to the end of the night.

Let me know your timing and we can meet there

https://meet.jit.si/proto-agi

Nil

> Meanwhile, I'll try to get a clearer idea of all this news.
>
> Thanks so much again
>
> Michele
>

Nil Geisweiller

unread,
Mar 26, 2021, 3:56:11 AM3/26/21
to ope...@googlegroups.com, Michele Thiella
On 3/25/21 9:03 PM, Michele Thiella wrote:
> Can I ask you to say something about tree of decisions in Eva? Was it a
> separate scheme/python module that analyzed SequentialAnd?
> While i'm at it, I can't place some components in your architecture:
> I read Moshe Looks thesis on MOSES and what I found on OpenPsi. But in
> practice what were they used for?

MOSES is a program learner. In principle it could learn any program; in
practice it is mostly used to learn multivariable boolean functions (as
it doesn't work very well on anything else, so far anyway).

See for more info

https://wiki.opencog.org/w/Meta-Optimizing_Semantic_Evolutionary_Search

> Finally, in practice what does PLN do/have more than URE?

The URE is a generic rewriting system, that needs a rule set to operate.

See for more info

https://wiki.opencog.org/w/Unified_rule_engine

Such a rule set can be PLN, which has been specifically tailored to handle
uncertain reasoning

https://github.com/opencog/pln

or the Miner, which has been tailored to find frequent subgraphs

https://github.com/opencog/miner

or others, though these are the two most used/mature.

Nil

Michele Thiella

unread,
Mar 27, 2021, 6:12:02 AM3/27/21
to opencog
Hi Nil,

ok, I had already seen the OpenCog wiki. I don't know why, but I hadn't linked the URE with the remaining modules. I understand better now.

Linas wrote about a "representation for data that is PLN-compatible"; what is this compatibility based on?
Days ago I was looking at the PLN rule files in the /rules directory, but I seem to lack the theoretical knowledge.
Is there any recommended book/paper to study before the code of the PLN rules?

For the meeting, could it be at 11.30am EET?
In Italy it would be 10.30, and unfortunately before that I have no network.
Let me know if it can fit. Thanks in advance!

Michele

Adrian Borucki

unread,
Mar 30, 2021, 6:03:33 PM3/30/21
to opencog
On Friday, 26 March 2021 at 08:36:23 UTC+1 Nil wrote:
On 3/24/21 3:53 PM, Michele Thiella wrote:
> My spoken English is not the best, but it will be a way to improve that too.

No problem, I'll adjust, by switching to a terrible French accent. :-)

> Unfortunately, this week I'm busy with other university things, could it
> be one of the next Fridays?

Sure, Fri 2 April then, I'm available from 8am to 3pm EET, then from 5pm
to the end of the night.

Let me know your timing and we can meet there

Can I join too? I won’t be a nuisance, I’ll just listen to what you guys will talk about :)
It would be good to make sure there aren’t any gaps in my own understanding.
I will adjust to your timing of course, no worries.

Nil Geisweiller

unread,
Mar 31, 2021, 8:31:41 AM3/31/21
to ope...@googlegroups.com, Adrian Borucki
On 3/31/21 1:03 AM, Adrian Borucki wrote:
> Can I join too?
Sure! You or anyone who wants to join can, once we agree on a timing.

Nil


Nil Geisweiller

unread,
Mar 31, 2021, 8:40:38 AM3/31/21
to ope...@googlegroups.com, Michele Thiella
Hi Michele,

On 3/27/21 12:12 PM, Michele Thiella wrote:
> Is there any recommended book/paper to study before the code of PLN rules?

Search for Probabilistic Logic Networks in

https://wiki.opencog.org/w/Background_Publications#Books_Directly_Related_to_OpenCog_AI

> For the meeting, could it be at 11.30am EET?

11:30am EET works for me. But maybe you mean 10:30am EET. With
daylight saving time it seems EET corresponds to Italy time. I'm not
sure, so double-check, but anyway 10:30am Italy time works for me.

Nil

Michele Thiella

unread,
Apr 1, 2021, 5:52:53 AM4/1/21
to opencog
Hi Nil,
you're right! Currently EET corresponds to the Italian time!
Great. I might be a few minutes late because I have a lesson first, but 10.45am EET will surely work!

No problem from my side either for those who want to join!
Thanks for the PLN link. See you tomorrow.

Michele

Douglas Miles

unread,
Apr 1, 2021, 10:09:09 AM4/1/21
to ope...@googlegroups.com
May I sit in on the meeting as a fly on the wall?
If so, when/how shall I connect? 

Thanks in advance!
Douglas Miles


Nil Geisweiller

unread,
Apr 1, 2021, 10:29:53 AM4/1/21
to ope...@googlegroups.com, Douglas Miles
Sure! The place is

https://meet.jit.si/proto-agi

the time is

10:45am EET

Unfortunately, that's probably too early if you're in the US.

Michele, maybe we could do a last-minute change to fit the US timezone
as well? At the risk of adding confusion, though.

I'll try to record the call, BTW.

Nil

Michele Thiella

unread,
Apr 1, 2021, 11:19:16 AM4/1/21
to opencog
Could it be around 9pm EET?
it's a completely different time, but would it work for everyone?

Michele

Douglas Miles

unread,
Apr 1, 2021, 1:29:39 PM4/1/21
to ope...@googlegroups.com
9pm EET works for me... is that 1.5 hours from now, or 25.5 hours from now?


Michele Thiella

unread,
Apr 1, 2021, 2:27:34 PM4/1/21
to opencog
25.5 hours from now, I hope :)

Nil Geisweiller

unread,
Apr 1, 2021, 3:55:12 PM4/1/21
to ope...@googlegroups.com, Michele Thiella
Alright, so the time is

Friday 2 Apr, 9pm EET (3pm EDT, if I'm correct)

and the place is

https://meet.jit.si/proto-agi

Everybody is invited.

Nil


Michael Duncan

unread,
Apr 2, 2021, 10:09:14 AM4/2/21
to opencog
It looks like 2pm EDT; I'll be there!

Linas Vepstas

unread,
Apr 2, 2021, 2:05:23 PM4/2/21
to opencog
Sorry for the late reply; I've been busy.

On Thu, Mar 25, 2021 at 2:03 PM Michele Thiella <acikoa...@gmail.com> wrote:
Hi Linas, 
I hope I'm not disturbing and wasting too much of your time!

Not at all. It allows me to practice explaining things. Knowing something is not all that useful, if you can't explain it to someone else!



I begin to struggle not to say nonsense.

Thus, if I understand your point 1 correctly, considering the README of the learn repo, the initial idea is unsupervised learning for natural language, with the next goal of extending the domain from text to "all things in the world".
 
Yes.

So, (still using unsupervised learning?) let the robot build its representation of the world through its observations.

Yes.

Now, the input set is no longer hand-crafted, but the algorithm still has the "turtles problem". Your point 2 would solve that too, right?
The only option I can think of is that the AGI itself writes its algorithms... but recursively, the AGI would have to invent itself

Uhh....
The input I'm working with has never been hand-crafted -- the input text is just .. text.

The hand-crafting refers to training sets: thus, it is common in neural-net training to take 100 photos of a hamburger, and 100 photos of a horse, and have some grad student apply these labels - hamburger or horse - and use that as the training set. Eventually, the neural net learns to tell apart hamburgers and horses. (but that's all).

The analogous task in linguistics is to have a grad student go through a text corpus, and mark all the nouns and all the verbs.  Eventually, the machine-learning algo learns how to tell nouns apart from verbs.

I'm trying to work with an input stream that has not been marked up -- it's just raw input, raw text.

The algorithm I'm using tries to stay as close as possible to "physics" -- using principles of entropy & probability to do its work. Thus, hopefully, the algo is not biasing the results.
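For the curious: the central quantity in this entropy-and-probability style of analysis (at least, as the opencog/learn README describes it) is the mutual information of observed word pairs. Shown here only as the general formula, not as the actual pipeline:

```
MI(a, b) = log2 [ p(a, b) / ( p(a) * p(b) ) ]
```

where p(a, b) is the observed frequency of the word pair, and p(a), p(b) are the frequencies of the individual words. Pairs with high MI "attract" each other, and that attraction is what gets learned -- no human-supplied labels anywhere.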

Sure, some futuristic day, a human-level AGI will be able to write code, but that is too futuristic to affect the current design work.
 

But then, without "true-AGI" learning, I'll never have a "true-AGI" knowledge base, and without that I won't be able to continue, right?

I don't understand the question.

Why work for point 1 if point 2 is a prerequisite?

It's not a pre-requisite!

It seems like a no-win situation. Maybe I'm just a pessimist!

I don't understand.

There will be another way... In the end, our knowledge base was also helped by our parents in some way.

? I don't understand what our parents have to do with this...



Ah ok now it makes a lot more sense! I really like solution 3).

I do too!  It is certainly one of the things that makes the AtomSpace distinct from anything else out there.  There are plenty of graph databases these days. They just can't do this.

I'm experimenting a little with the potential of Atomese (sometimes at random), but it's nice to write. I'm also learning scheme, which I didn't know anything about.

The primary benefit of scheme is that it is functional programming, and learning how to code in a functional programming language completely changes your world-view of what a program is, and what software is.  If you only know C/C++/java/python, then you have a very narrow, very restricted view of the world. You're missing a large variety of important concepts in software. Yes, learning functional programming is "good for you".

Can I ask you to say something about the tree of decisions in Eva? Was it a separate scheme/python module that analyzed the SequentialAnd?

No, it was just plain Atomese.

Many Atoms have an execute method (actually, all Atoms have an execute method, but it is non-trivial on only some of them.)

The execute method on SequentialAnd simply steps through each Atom in its outgoing set, and asks "are you true?" -- by calling execute, and seeing if it returns "true". If some atom in the outgoing list returns "false", then SequentialAnd stops and returns false. Otherwise, it continues till it reaches the end of the list, and then returns true.

There is no "external module" to perform this analysis.
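In plain Python terms, the behavior just described amounts to short-circuit evaluation. This is a behavioral sketch only; the real implementation is C++ inside the AtomSpace:

```python
# Behavioral sketch of SequentialAnd, as described above: step through the
# outgoing set in order, execute each member, and stop at the first "false".
# This models the semantics only, not the actual C++ implementation.

def sequential_and(outgoing_set, execute):
    for atom in outgoing_set:
        if not execute(atom):   # ask each atom "are you true?"
            return False        # stop immediately on the first failure
    return True                 # reached the end: the whole sequence is true

# Toy usage: record which "atoms" actually get executed.
log = []
def fake_execute(atom):
    log.append(atom)
    return atom != "fails"

all_true = sequential_and(["a", "b"], fake_execute)
short_circuited = sequential_and(["a", "fails", "never-run"], fake_execute)
```

Note that in the second call nothing after the failing atom is ever executed, which is exactly why SequentialAnd works as a decision-tree stepper.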

While I'm at it, I can't place some components in your architecture:
I read Moshe Looks' thesis on MOSES and what I found on OpenPsi. But in practice, what were they used for?

I used MOSES to analyze medical notes from a hospital (free-text doctor and nurses notes) and predict patient outcomes. Some other people used MOSES to try to predict the stock market. Ben/Nill used it to hunt down genes that correlate with long life.

OpenPsi was used as an inspiration for a kind of combined prioritization-plus-human-emotion-modelling system. It was, and still is, problematic for failing to separate these two ideas. There are many practical problems in AtomSpace applications that lead to a combinatorial explosion of possibilities, and one part of OpenPsi seems to be effective in deciding which of these possibilities should be explored first.  Unfortunately, the design combined it with a really terrible model of human psychology, and this led to a mass of confusion that was never fully resolved. It doesn't help that the creator of MicroPsi came back and said that OpenPsi has no resemblance to MicroPsi whatsoever. There are some good ideas in there, but the implementation remains problematic.
 
Finally, in practice what does PLN do/have more than URE?

I suppose Nil answered this already, but ... PLN defines a certain specific set of truth-value formulas. URE doesn't care about truth value formulas.

URE can chain together rules, -- arbitrary collections of rules. PLN is a specific collection of rules, and they are not only specific rules, but they are coupled with specific formulas for determining the truth value.

So, for example, consider chaining implications: if A implies B and B implies C, then A implies C. This is a "rule" that recognizes an input of two pairs (A,B) and (B,C), and creates the pair (A,C): if the truth of A is T, it marks the truth of C as T. A variant of this is Bayesian deduction, where the truth values are replaced by conditional probabilities.

URE doesn't care what kind of rule it is, or what happens to the truth values. The rules could be nonsense, and the formulas could be crazy, and URE would still try to chain them.
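That chaining rule can be sketched in a few lines of Python. This is a toy illustration only: the probabilistic variant below uses a naive product formula, not the actual PLN deduction formula:

```python
# Toy sketch of implication chaining: from pairs (A,B) and (B,C), derive (A,C).
# The probabilistic variant uses a naive product P(C|A) = P(B|A) * P(C|B),
# which is a deliberate simplification, NOT the real PLN deduction formula.

def chain(rules):
    """rules: dict mapping (premise, conclusion) -> conditional probability.

    Repeatedly applies the deduction rule until no new pairs can be derived.
    """
    derived = dict(rules)
    changed = True
    while changed:
        changed = False
        for (a, b), p_ab in list(derived.items()):
            for (b2, c), p_bc in list(derived.items()):
                if b == b2 and (a, c) not in derived:
                    derived[(a, c)] = p_ab * p_bc  # naive chaining step
                    changed = True
    return derived

# (A implies B with 0.9) and (B implies C with 0.8) yields (A implies C).
rules = {("A", "B"): 0.9, ("B", "C"): 0.8}
result = chain(rules)
```

The `chain` function here plays the role of the chainer (the URE's job); the product inside the loop plays the role of the truth-value formula (PLN's job). Swapping in a different formula changes the semantics without touching the chaining machinery, which is exactly the separation described above.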
If your machine is incapable of talking, it would be hard to argue that it's smart. Now, dogs, cats, crows and octopi can't talk, and for centuries, some people (many people) believed they weren't smart. Well, now I think we all know better, but still, the best way to prove how smart or stupid you are is to open your mouth.
 
If I didn't want interactions with humans, could I do it differently?

Well, you could build a self-driving car.  But I don't think Elon Musk is claiming that FSD is AGI.

A certain variation of the sensor values already represents "the forward movement"; I do not need to associate a name with it if I don't speak.
Also, for the Atom "bottle" I could use its ID instead.
I don't understand why removing natural language implies having an inference devoid of "true understanding".

You know the expression "writing about music is like dancing about architecture"? Well, you could build a robot that dances, but you would have a hard time convincing anyone that it's smart, that it's anything other than a clever puppet.
 

Stupid example: If I speak Italian with a Frenchman, neither of us understands the other. But a bottle remains a bottle for both, and if I give him my hand he will probably do the same ... or he will leave without saying goodbye.

It's all very contextual. If you speak Italian, and you see a human, you assume that what you see has all the other properties of being a human. If you speak Italian, and you see a robot with a mechanical arm, you assume that it has all the typical properties of a robot: stupid and lifeless, just a machine.

-- Linas

I'm probably missing something big, but until I bang my head against it, I won't see it.


This can be solved by carefully hand-crafting a chatbot dialog tree. (The ghost chatbot system in opencog was designed to allow such dialog trees to be created.) Over the decades, many chatbots have been written. Again, there are common problems:

-- the text is hard-coded, and not linguistic.  Minor changes in wording cause the chatbot to get confused.
-- there is no world-model, or it is ad hoc and scattered over many places
-- no ability to perform reasoning
-- no memory of the dialog ("what were we talking about?" -- well, chatbots do have a one-word "topic" variable, so the chatbot can answer "we are talking about baseball", but that's it). There is no "world model" of the conversation, and no "world model" of who the conversation was with ("On Sunday, I talked to John about a bottle on a table and how to grasp it").

Note that ghost has all of the above problems. It's not linguistic, it has no world-model, it has no defined representation that can be reasoned over, and it has no memory.

20 years ago, it was hard to build a robot that could grasp a bottle. It was hard to create a good chatbot.

What is the state of the art, today? Well, Tesla has self-driving cars, and Amazon and Apple have chatbots that are very sophisticated.  There is no open source for any of this, and there are no open standards, so if you are a university grad student (or a university professor) it is still very very hard to build a robot that can grasp a bottle, or a robot that you can talk to.  And yet, these basic tasks have become "engineering"; they are no longer "science".  The science resides at a more abstract level.

--linas

I find the abstract level incredible, both in terms of beauty and difficulty!

Michele
 

Dave Xanatos

unread,
Apr 2, 2021, 2:07:14 PM4/2/21
to ope...@googlegroups.com

Looks like it is 3pm EDT, fyi.  See you then 😊

Michael Duncan

unread,
Apr 2, 2021, 2:18:31 PM4/2/21
to opencog
Yes, the website I linked to is incorrect, sorry about that!

Dave Xanatos

unread,
Apr 2, 2021, 5:37:46 PM4/2/21
to ope...@googlegroups.com

The link for the recording *WILL* be at https://xanatos.com/downloads/OpenCogNil20210402.mp4

 

Note the filesize is 598 MB (1920 x 1080).  It is uploading now and I live in the boondocks (which means we only have DSL here…  it'll take a while.)  Try it Saturday morning, it will be ready by then (probably ready by 8pm, EDT USA, but I believe in conservative estimates 😊 ).

 

Thanks Nils and all, I got some valuable ideas out of our conversation.

 

Good weekend to all,

 

Dave

 

 

 

From: ope...@googlegroups.com <ope...@googlegroups.com> On Behalf Of Michael Duncan
Sent: Friday, April 2, 2021 10:09 AM
To: opencog <ope...@googlegroups.com>
Subject: Re: AGI & Robotics & Sophia [was Re: New user [was Re: [opencog-dev] Problem in atom deletion from postgreSQL

 

it looks like 2pm edt, i'll be there!

Douglas Miles

unread,
Apr 3, 2021, 12:52:10 AM4/3/21
to ope...@googlegroups.com
For those waiting, just now I was able to download David's recording.

Again, great talk Nil!

xan...@xanatos.com

unread,
Apr 3, 2021, 2:55:22 PM4/3/21
to Douglas Miles, ope...@googlegroups.com
Glad to hear, hope it works well for all,

Dave


------ Original message------
From: Douglas Miles
Date: Sat, Apr 3, 2021 00:52
Cc:
Subject:Re: AGI & Robotics & Sophia [was Re: New user [was Re: [opencog-dev] Problem in atom deletion from postgreSQL

For those waiting, just now I was able to download David's recording.

Again, great talk Nil!

On Fri, Apr 2, 2021 at 2:37 PM Dave Xanatos  wrote:

> The link for the recording **WILL** be at
> https://xanatos.com/downloads/OpenCogNil20210402.mp4
>
>
>
> Note the filesize is 598mB (1920 x 1080).  It is uploading now and I live
> in the boondocks (which means we only have DSL here…  it'll take a while.)
> Try it Saturday morning, it will be ready by then (probably ready by 8pm,
> EDT USA, but I believe in conservative estimates 😊 ).
>
>
>
> Thanks Nils and all, I got some valuable ideas out of our conversation.
>
>
>
> Good weekend to all,
>
>
>
> Dave
>
>
>
>
>
>
>
> *From:* ope...@googlegroups.com  *On Behalf Of
> *Michael Duncan
> *Sent:* Friday, April 2, 2021 10:09 AM
> *To:* opencog 
> *Subject:* Re: AGI & Robotics & Sophia [was Re: New user [was Re:
> [opencog-dev] Problem in atom deletion from postgreSQL
>
>
>
> it looks like 2pm edt, i'll be there!
>
> https://time.is/EET
>
>
>
>
>
> On Thursday, April 1, 2021 at 3:55:12 PM UTC-4 Nil wrote:
>
> Alright, so the time is
>
> Friday 2 Apr, 9pm EET (3pm EDT, if I'm correct)
>
> and the place is
>
> https://meet.jit.si/proto-agi
>
> Everybody is invited.
>
> Nil
>
> On 4/1/21 9:27 PM, Michele Thiella wrote:
> > 25.5 from now, i hope :)
> >
> > Il giorno giovedì 1 aprile 2021 alle 19:29:39 UTC+2 logi...@gmail.com
> ha
> > scritto:
> >
> >  9pm EET works for me.. is that 1.5 hours from now or 25.5 hours
> > from now?
> >
> > On Thu, Apr 1, 2021 at 8:19 AM Michele Thiella 
> > wrote:
> >
> > Could it be around 9pm EET?
> > it's a completely different time but should it be available for
> > everyone?
> >
> > Michele
> > Il giorno giovedì 1 aprile 2021 alle 16:29:53 UTC+2 Nil ha scritto:
> >
> > Sure! The place is
> >
> > https://meet.jit.si/proto-agi 
> >
> > the time is
> >
> > 10:45am EET
> >
> > Unfortunately probably too early if you're in the US.
> >
> > Michele, maybe we could do a last minute change to fit the
> > US timezone
> > as well? With the risk of adding confusion though.
> >
> > I'll try to record the call, BTW.
> >
> > Nil
> >
> > On 4/1/21 5:08 PM, Douglas Miles wrote:
> > > May I sit in on the meeting as a fly on the wall?
> > > If so, when/how shall I connect?
> > >
> > > Thanks in advance!
> > > Douglas Miles
> > >
> > > On Thu, Apr 1, 2021 at 2:52 AM Michele Thiella wrote:
> > >
> > > Hi Nil,
> > > you're right! currently EET corresponds to the Italian time!
> > > Great, then I might be a few minutes late because I have
> > a lesson
> > > first. But surely 10.45am EET can work!
> > >
> > > Also for me, no problems for those who want to join!
> > > Thanks for the PLN link. See you tomorrow.
> > >
> > > Michele
> > > Il giorno mercoledì 31 marzo 2021 alle 14:40:38 UTC+2 Nil
> > ha scritto:
> > >
> > > Hi Michele,
> > >
> > > On 3/27/21 12:12 PM, Michele Thiella wrote:
> > > > Is there any recommended book/paper to study before the
> > code
> > > of PLN rules?
> > >
> > > Search for Probabilistic Logic Networks in
> > >
> > > https://wiki.opencog.org/w/Background_Publications#Books_Directly_Related_to_OpenCog_AI
> > >
> > > > > Finally, in practice what does PLN do/have more than URE?
> > > >
> > > > The URE is a generic rewriting system, that needs a rule set to
> > > > operate.
> > > >
> > > > See for more info
> > > >
> > > > https://wiki.opencog.org/w/Unified_rule_engine
> > > >
> > > > Such rule set can be PLN, which has been specifically tailored to
> > > > handle uncertain reasoning
> > > >
> > > > https://github.com/opencog/pln

Michele Thiella

unread,
Apr 6, 2021, 7:03:41 AM4/6/21
to opencog
Hi Linas,
don't worry, no rush!

Il giorno venerdì 2 aprile 2021 alle 20:05:23 UTC+2 linas ha scritto:
But then, without a "true-AGI" learning, I'll never have a "true-AGI" knowledge base and without that I'll not be able to continue, right?

I don't understand the question.

Why work for point 1 if point 2 is a prerequisite?

It's not a pre-requisite!

It seems like a no-win situation. Maybe I'm just a pessimist!

I don't understand.

There will be another way... In the end, our knowledge base was also helped by our parents in some way.

? I don't understand what our parents have to do with this...


It was just a personal reflection. I mean that I cannot get a project to AGI without a learning algorithm (because otherwise the knowledge base would surely be hand-crafted).

Regarding the parents, I didn't know how to explain.
My idea is that maybe I wouldn't rule out supervised learning, because human learning is sometimes guided by a teacher, who gives you the image of a horse and also tells you that it is a horse.

 
The primary benefit of scheme is that it is functional programming, and learning how to code in a functional programming language completely changes your world-view of what a program is, and what software is.  If you only know C/C++/java/python, then you have a very narrow, very restricted view of the world. You're missing a large variety of important concepts in software. Yes, learning functional programming is "good for you".

I took a course on the semantics and type system of a functional mini-language. Now I'm learning the practical code!
 

Can I ask you to say something about the tree of decisions in Eva? Was it a separate scheme/python module that analyzed the SequentialAnd?

No, it was just plain Atomese.

Many Atoms have an execute method (actually, all Atoms have an execute method, but it is non-trivial on only some of them.)

The execute method on SequentialAnd simply steps through each Atom in its outgoing set, and asks "are you true?" -- by calling execute, and seeing if it returns "true". If some atom in the outgoing list returns "false", then SequentialAnd stops and returns false. Otherwise, it continues till it reaches the end of the list, and then returns true.

There is no "external module" to perform this analysis.

While I'm at it, I can't place some components in your architecture:
I read Moshe Looks' thesis on MOSES and what I found on OpenPsi. But in practice, what were they used for?

I used MOSES to analyze medical notes from a hospital (free-text doctor and nurses notes) and predict patient outcomes. Some other people used MOSES to try to predict the stock market. Ben/Nill used it to hunt down genes that correlate with long life.

OpenPsi was used as an inspiration for a kind of combined prioritization-plus-human-emotion-modelling system. It was, and still is, problematic for failing to separate these two ideas. There are many practical problems in AtomSpace applications that lead to a combinatorial explosion of possibilities, and one part of OpenPsi seems to be effective in deciding which of these possibilities should be explored first.  Unfortunately, the design combined it with a really terrible model of human psychology, and this led to a mass of confusion that was never fully resolved. It doesn't help that the creator of MicroPsi came back and said that OpenPsi has no resemblance to MicroPsi whatsoever. There are some good ideas in there, but the implementation remains problematic.
 
Finally, in practice what does PLN do/have more than URE?

I suppose Nil answered this already, but ... PLN defines a certain specific set of truth-value formulas. URE doesn't care about truth value formulas.

URE can chain together rules, -- arbitrary collections of rules. PLN is a specific collection of rules, and they are not only specific rules, but they are coupled with specific formulas for determining the truth value.

So, for example, consider chaining implications: if A implies B and B implies C, then A implies C. This is a "rule" that recognizes an input of two pairs (A,B) and (B,C), and creates the pair (A,C): if the truth of A is T, it marks the truth of C as T. A variant of this is Bayesian deduction, where the truth values are replaced by conditional probabilities.

URE doesn't care what kind of rule it is, or what happens to the truth values. The rules could be nonsense, and the formulas could be crazy, and URE would still try to chain them.

 
Thanks for these explanations, I'm continuing to expand my knowledge!


If your machine is incapable of talking, it would be hard to argue that it's smart. Now, dogs, cats, crows and octopi can't talk, and for centuries, some people (many people) believed they weren't smart. Well, now I think we all know better, but still, the best way to prove how smart or stupid you are is to open your mouth.
 
If I didn't want interactions with humans, could I do it differently?

Well, you could build a self-driving car.  But I don't think Elon Musk is claiming that FSD is AGI.

A certain variation of the sensor values already represents "the forward movement"; I do not need to associate a name with it if I don't speak.
Also, for the Atom "bottle" I could use its ID instead.
I don't understand why removing natural language implies having an inference devoid of "true understanding".

You know the expression "writing about music is like dancing about architecture"? Well, you could build a robot that dances, but you would have a hard time convincing anyone that it's smart, that it's anything other than a clever puppet.
 

Stupid example: If I speak Italian with a Frenchman, neither of us understands the other. But a bottle remains a bottle for both, and if I give him my hand he will probably do the same ... or he will leave without saying goodbye.

It's all very contextual. If you speak Italian, and you see a human, you assume that what you see has all the other properties of being a human. If you speak Italian, and you see a robot with a mechanical arm, you assume that it has all the typical properties of a robot: stupid and lifeless, just a machine.

-- Linas

Ok so, in summary: either I make it talk or I have to invent another way to demonstrate its intelligence!
I will have to think better about all this, elaborate the concepts that have been said.

In the meantime, thanks for everything Linas.

Michele

Michele Thiella

unread,
Apr 6, 2021, 7:04:00 AM4/6/21
to opencog
Thanks again Nil for the meeting
and thanks to all participants!

I saw your push, I'll try to see if everything works for me too.
And I'll let you know the direction of my thesis soon!

Michele

Linas Vepstas

unread,
Apr 6, 2021, 12:28:13 PM4/6/21
to opencog
Hi Michele,

On Tue, Apr 6, 2021 at 6:03 AM Michele Thiella <acikoa...@gmail.com> wrote:

It was just a personal reflection. I mean that I cannot get a project to AGI without a learning algorithm (because otherwise the knowledge base would surely be hand-crafted).

Yes, everything must be boot-strapped.


My idea is that maybe I wouldn't rule out supervised learning, because human learning is sometimes guided by a teacher, who gives you the image of a horse and also tells you that it is a horse.

The difference between human learning and machine learning is that human learning is considered to be "one-shot". With only a few sentences, you tell a kid that "this is a horse", and then for a few minutes, the kid watches the horse trot about the paddock, or maybe just pull on some grass, and then, months later, the kid might see a horse far off in the distance, from a car window, and say "oh look there's a horse!"

Supervised learning is something completely different. It is a curated corpus of thousands or tens of thousands of carefully selected photos, with a horse in the center of the frame, filling about half the frame or more, in high contrast, good lighting, good pixel resolution.  A small army of grad students created that corpus, carefully drawing a red box around each horse. That corpus is then enshrined as the "WhatsamattaU. Standard Reference Horse Corpus" and is widely shared, and all the machine learning experts use it to measure accuracy, which is always 86% to 91%, except for their new algorithm, which achieves an accuracy of 91.54% on this corpus.

Supervised learning is not human learning; supervised learning is the careful encoding of human knowledge into a dataset such that a machine can reproduce it. It's a lot like automatically "learning" a new compression algorithm that can compress and decompress files in a slightly lossy fashion. The goal is not to compress files, but to understand what's in those files. The kid already knows that horses eat grass and trot around paddocks. The machine has no clue what eating is, or what a paddock is. You'd need a small army of grad students to create a corpus for those concepts.

"It's turtles all the way down."


Ok so, in summary: either I make it talk or I have to invent another way to demonstrate its intelligence!

Yes.

-- Linas
 

Michele Thiella

unread,
Apr 28, 2021, 4:38:40 AM4/28/21
to opencog
Hello Nil, hello Linas and hello everyone,

First of all, Nil, I have spoken to my supervisors and unfortunately I will not be able to develop your ROCCA project. 
I'll try to follow the developments, since you explained to me how the code works (thanks again).

Instead, I will focus on solving the blocksworld problem (and then expand the project by adding communication).
So, I'm studying how URE inference works.
My test repository can be found here: https://github.com/raschild6/blocksworld_problem

I don't understand what I'm doing wrong ..

- Why does the backward chaining fail to resolve the goal?
- Also, I think I don't quite understand how the fuzzy conjunction introduction and elimination rules work.

I have other questions related to the URE log but for now I would like to understand these.

(I don't know if it's better to open a new conversation)
Thanks again for your help, sorry for the inconvenience!

Michele

Anatoly Belikov

unread,
Apr 28, 2021, 6:17:21 AM4/28/21
to ope...@googlegroups.com
Planning in my example didn't work due to certain assumptions being made in the URE. So let's say the URE comes up with a nested BindLink like this:

(ExecutionOutputLink (stack c a)
...
    (ExecutionOutputLink (stack a b)
        (ExecutionOutputLink (pickup a))))

When it evaluates (stack c a), all the atoms introduced by (stack a b) and (pickup a) are present in the atomspace. So the preconditions of stacking c on a are not satisfied (a is both 'held' and not 'held'). Probably there is a simple workaround, like placing all new facts into a separate ContextLink. Such simple planning problems are more naturally expressed in the new opencog-hyperon, so I lost the motivation for turning the URE into a planner. Besides, all planners rely on heuristics to guide the search, so even if you make the URE work on this particular small example, you'll have to do some more work to integrate them into the URE.
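One way to see the clash described above, and what threading a separate state through each step buys you, is a minimal plain-Python blocksworld sketch. This is illustrative only, not URE or Atomese, and the state layout is made up for this example:

```python
# Minimal blocksworld sketch: each action checks its preconditions against an
# explicit state and returns a *new* state, so facts introduced by earlier
# steps never pollute later precondition checks. Plain Python, not Atomese.

def pickup(state, x):
    # Preconditions: hand empty, and x has nothing on top of it.
    assert state["holding"] is None and state["clear"][x], "precondition failed"
    return {"holding": x,
            "clear": dict(state["clear"], **{x: False}),
            "on": dict(state["on"])}

def stack(state, x, y):
    # Preconditions: we are holding x, and y has nothing on top of it.
    assert state["holding"] == x and state["clear"][y], "precondition failed"
    return {"holding": None,
            "clear": dict(state["clear"], **{x: True, y: False}),
            "on": dict(state["on"], **{x: y})}

# Start: blocks a and b on the table, both clear, hand empty.
s0 = {"holding": None, "clear": {"a": True, "b": True}, "on": {}}
s1 = pickup(s0, "a")
s2 = stack(s1, "a", "b")
```

Because each action returns a fresh state, the "a is held" fact of `s1` and the "a is not held" fact of `s2` never coexist, which is the contradiction that arises when everything lands in one shared atomspace.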



Wed, 28 Apr 2021 at 12:38, Michele Thiella <acikoa...@gmail.com>:

Michele Thiella

unread,
Apr 28, 2021, 8:56:07 AM4/28/21
to opencog
Hi Anatoly,

yes, I started from your example!
I had sensed that there was a problem with the existence of opposing atoms. I would love it if someone could show me a simple workaround!
Or maybe some advice on how to correctly build ContextLinks so that only those necessary for each nesting level are considered.

Alternatively, regarding Hyperon, is it already developed enough to solve the blocksworld problem?

For now, it would be enough for me to solve only this simple problem.
Thanks a lot for the answer

Linas Vepstas

unread,
Apr 28, 2021, 2:37:43 PM4/28/21
to opencog
On Wed, Apr 28, 2021 at 5:17 AM Anatoly Belikov <awbe...@gmail.com> wrote:
Planning in my example didn't work due to certain assumptions being made in the URE. So let's say the URE comes up with a nested BindLink like this:

(ExecutionOutputLink (stack c a)
...
    (ExecutionOutputLink (stack a b)
        (ExecutionOutputLink (pickup a))))

When it evaluates (stack c a), all the atoms introduced by (stack a b) and (pickup a) are present in the atomspace. So the preconditions of stacking c on a are not satisfied (a is both 'held' and not 'held'). Probably there is a simple workaround, like placing all new facts into a separate ContextLink. Such simple planning problems are more naturally expressed in the new opencog-hyperon,

Could you explain (with enough detail) how it is more natural?  I am very much interested in allowing natural expressivity in the atomspace.
 
so I lost the motivation for turning URE into a planner.

It is not at all obvious that the URE is a suitable platform for being a planner, anyway -- certainly, I would not have recommended that course of action.

Planning can be viewed as a form of a constraint satisfaction problem, and the best constraint satisfaction solver that I know of is ASP (answer set programming), and specifically, the Potsdam solver. It is ideal for solving anything with crisp boolean constraints.  It does have some callbacks that should allow it to work with "fuzzy" constraints, but I have never tried these.

I would like to advocate that, for planning and for constraint satisfaction, the Potsdam solver should be integrated so that it can work with declarative data in the AtomSpace.  This would make it "kind of like" the URE, but really quite different in many ways.

I don't think that writing a new planner, either in Hyperon or in the atomspace, is a good idea. You will almost certainly create something that is 1000x slower than the best solvers available today.  I want to illustrate this with a story.

In the 1980's, electronic circuit simulation (i.e. chip design rule verification) was done using backward/forward chaining, similar to what the URE does. This was a very established technique, a multi-billion dollar industry with half-a-dozen vendors and another half-a-dozen in-house, proprietary solutions. In the late 80's, early 1990's, new planners and verifiers were created using SAT solvers. For chips of that era (about 500K transistors) the SAT solvers and the backward/forward chainers were about equal in performance. For chips with 2M transistors, the SAT solvers were 2x or 4x faster than the backward-forward chainers.  For chips with >5M transistors, SAT was 10x, 20x faster than backward/forward, and the established chip-planner and verification houses were going bankrupt, because all of their customers had transitioned to the new SAT solver technology.

This is a multi-billion dollar lesson in technology, and you ignore it at your own peril.  Trying to reinvent your own planner is perhaps a good homework exercise to learn basic principles, but it is not sound software engineering.  I very strongly advise against it.

Now, PLN is a bit different. It was intended to work with probabilities, instead of crisp true/false values. This does make the problem harder.  However, I think there are plenty of clever things one can do, without resorting to backward-forward chaining.  Examples include:

* Given some inputs and rules, assign random crisp true/false values to them (according to a probability distribution), and use ASP to solve the problem.  Repeat 100 times, and take the average. The desired result is that average.

* Most probabilistic problems can be split into two parts: one that is "almost crisp t/f" (the hard constraints) and the fuzzy, soft constraints. Use ASP to solve the crisp parts, and explore the fuzzy/soft parts systematically.
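The first bullet can be sketched concretely. This is a toy illustration only: the "solver" below is a trivial stand-in that evaluates one fixed boolean formula, not a real ASP system such as the Potsdam solver:

```python
import random

# Toy sketch of the first approach above: repeatedly sample crisp true/false
# assignments from the input probabilities, solve each crisp instance, and
# average the results. The crisp "solver" here is a trivial stand-in.

def crisp_solver(assignment):
    # Stand-in "problem": is (a AND b) OR c satisfied?
    return (assignment["a"] and assignment["b"]) or assignment["c"]

def monte_carlo(probs, solver, trials=10000, seed=42):
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        # Draw one crisp true/false world according to the probabilities...
        sample = {var: rng.random() < p for var, p in probs.items()}
        # ...and solve that crisp instance.
        if solver(sample):
            hits += 1
    return hits / trials

# With P(a)=0.5, P(b)=0.5, P(c)=0.2 the exact answer is 0.25 + 0.2 - 0.05 = 0.4;
# the estimate should land close to that.
estimate = monte_carlo({"a": 0.5, "b": 0.5, "c": 0.2}, crisp_solver)
```

In a real system the crisp instance would go to an ASP grounder/solver rather than a Python function, but the sample-solve-average shape stays the same.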

Now, what I say above is "easy to say" but is "hard to do" -- implementing what I suggest is a large project.  But then, in software, nothing is free. facebook and google and amazon employ thousands of engineers because writing good software is hard. Imagining that you can create a new planner out of thin air in a few months is not a realistic dream. Don't repeat history; learn from it.

-- Linas


Ben Goertzel

unread,
Apr 28, 2021, 2:59:49 PM4/28/21
to opencog


Now, what I say above is "easy to say" but is "hard to do" -- implementing what I suggest is a large project.  But then, in software, nothing is free. facebook and google and amazon employ thousands of engineers because writing good software is hard. Imagining that you can create a new planner out of thin air in a few months is not a realistic dream. Don't repeat history; learn from it.



Linas, I am sure that writing a commercial-grade scalable planner is a lot of work and would take a team of competent developers more than a few months

On the other hand, exploring and prototyping new planning algorithms/approaches is a perfectly sensible and feasible thing for a smart student to do over a few months, and I think URE and/or PLN could be reasonable tools for this...

As I understand what is being proposed here is a student research project not a large-scale engineering project...

I note that a lot of the large-scale engineering projects being done at Google and Amazon are based on algorithms developed by grad students via experimentation w/ non-scalable "throwaway code" ... (and ofc many of those grad students then get hired by the tech behemoths and may become engineers working on scalable systems, or may remain algo-focused researchers...)

Exploring constraint-satisfaction-based planning makes sense, but for some planning domains this approach may not be best.  E.g. if you're planning in a highly dynamic environment (as faced say by robots moving around in a house or on the street) then I'm not sure the available constraint satisfaction algos can deal well w/ the needed real-time plan updating...?

My 2 tokens worth...
ben



 

Linas Vepstas

unread,
Apr 28, 2021, 3:25:23 PM4/28/21
to opencog
Ben,

On Wed, Apr 28, 2021 at 1:59 PM Ben Goertzel <b...@goertzel.org> wrote:


Now, what I say above is "easy to say" but is "hard to do" -- implementing what I suggest is a large project.  But then, in software, nothing is free. facebook and google and amazon employ thousands of engineers because writing good software is hard. Imagining that you can create a new planner out of thin air in a few months is not a realistic dream. Don't repeat history; learn from it.


As I understand what is being proposed here is a student research project not a large-scale engineering project...

Anatoly implied that he wants to create a new planner in Hyperon. Creating Hyperon, and a new planner for it, using "more natural" expressiveness, is .. well, sure, you can treat Hyperon as a student project consisting of throw-away code. But the way you talk about Hyperon is as if it is going to be a replacement for the atomspace, and not as some experimental throw-away code.

Anyway, I'm not really talking about throw-away code; I am just saying that comp sci journals, such as the "Journal of Optimization", have thousands of articles on planning algorithms, developed over many decades. Ignoring these and walking in with a clean-slate Hyperon design ... well, don't be naive.
 

Exploring constraint-satisfaction-based planning makes sense, but for some planning domains this approach may not be best.  E.g. if you're planning in a highly dynamic environment (as faced say by robots moving around in a house or on the street) then I'm not sure the available constraint satisfaction algos can deal well w/ the needed real-time plan updating...?

(1) ASP will solve most planning and constraint satisfaction problems in milliseconds.   So you can do a 60-frames-per-second update rate if you wish.

(2) It's not like people haven't done this before. I went to some talk, I think in HKSTP, where someone talked about using SLAM (simultaneous localization and mapping) from ultrasound sensors on some robot, and then hooking that into ASP (specifically, the Potsdam solver was mentioned) to perform basic reasoning tasks ("if the door is closed, try to push the door", "if the door won't push try to pull the door" "else reach for the doorknob") -- and this was not the first time I've heard of ASP being integrated in with probabilistic systems in robotics -- talk titles tend to be along the lines of "how to integrate probabilistic and crisp logic reasoning in robotics" or something similar.


My 2 tokens worth...

Are these some of those bitcoin tokens that everyone is talking about these days?

--linas

Ben Goertzel

unread,
Apr 28, 2021, 3:42:02 PM4/28/21
to opencog
>> As I understand what is being proposed here is a student research project not a large-scale engineering project...
>
>
> Anatoly implied that he wants to create a new planner in Hyperon. Creating Hyperon, and a new planner for it, using "more natural" expressiveness, is .. well, sure, you can treat Hyperon as a student project consisting of throw-away code. But the way you talk about Hyperon is as if it is going to be a replacement for the atomspace, and not as some experimental throw-away code.
>
> Anyway, I'm not really talking about throw-away code; I am just saying that comp sci journals, such as the "Journal of Optimization", have thousands of articles on planning algorithms, developed over many decades. Ignoring these and walking in with a clean-slate Hyperon design ... well, don't be naive.

Ah OK. Multiple threads going on here.

I was referring to Michele's student project which is clearly not
aimed at scalable production code... whether Michele uses Hyperon or
Original OpenCog, it's a student research experiment on
BlocksWorld....

Hyperon is ultimately intended as an alternative to current OpenCog
Atomspace, yes. However the current crude Hyperon prototype code is
definitely NOT intended as an alternative to Atomspace -- yet can
still be used for research experimentation.

> (1) ASP will solve most planning and constraint satisfaction problems in milliseconds. So you can do a 60-frames-per-second update rate if you wish.

Ah cool, good to know!!! My intuition on this is obsolete,
apparently ;p ... will look up some references...

Ben Goertzel

unread,
Apr 28, 2021, 8:05:25 PM4/28/21
to opencog
> Hyperon is ultimately intended as an alternative to current OpenCog
> Atomspace, yes.

Actually I should be more precise here.

It is not yet clear whether we'll end up replacing the current
Atomspace in the Hyperon system... we are open to doing so but this
isn't yet decided...

What is more clear is that we want to replace the current Pattern
Matcher, or at very least deprecate a large percentage of its
functions.... Alexey has written a bunch about why we want to do
this, in the various docs linked from

https://wiki.opencog.org/w/Hyperon:Atomspace

(see esp. the docs linked from the bottom of the page, and links
contained therein)...

In short the design pattern we want to follow is to have

-- a static pattern matcher (which however manages bound variables
fairly sophisticatedly, and can match variables against whole
sub-metagraphs...), which then also gets an efficient implementation
for execution against distributed Atomspaces

-- an Atomese language which is used to do a lot of the more
programmatic stuff done in the current Pattern Matcher, as well as a
lot of the stuff habitually done in Scheme scripts in current OpenCog
usage

This is different from the design pattern used in the current PM,
which embeds an awful lot of sophisticated program-control
functionality into the Pattern Matcher itself (thus making it way more
than a pattern matcher in any conventional sense)...

The current PM seems like much more complex code than the current
Atomspace, but maybe I'm missing something subtle in the latter?

ben

Michele Thiella

unread,
Apr 29, 2021, 5:16:05 AM4/29/21
to opencog
Hi Ben, a great pleasure to meet you!

Il giorno mercoledì 28 aprile 2021 alle 21:42:02 UTC+2 Ben Goertzel ha scritto:
I was referring to Michele's student project which is clearly not
aimed at scalable production code... whether Michele uses Hyperon or
Original OpenCog, it's a student research experiment on
BlocksWorld....

You are right: my project is absolutely not aimed at developing scalable production code (it would be nice, but then I would never graduate).
Broadly speaking, the idea is to bring a simple example of the potential of an architecture towards AGI in an "industrial" environment.
Now, what potential can it ever have?
 
My supervisors say that if I solved this simple plan, I could add communication with a human and
thus get a planner that is certainly inefficient, restricted to the domain of blocks, etc.,
but a planner based on the semantic aspect of the problem, as opposed to all those solvers that solve problems without understanding their meaning.

Simple example:

let's say that the red cube is a book and the green cube is a glass.
The robot knows this (for now I'm using AprilTags on top of the cubes, but with point clouds and some good classifiers I think we can recognize objects without labeling them, or better yet with NNs on the images).
Anyway, the robot's actions are pick-up and put-down of cubes.
The idea is to tell the robot "Drink" or "Read" and see it take the glass rather than the book.
Finally, it would be nice if, when the plan fails (there isn't a book on the table), the robot interacted with the human to increase its knowledge base and fill in the gaps that led the plan to fail.

I'm not sure if this all makes sense, I'm still learning and understanding this wide world.
Maybe you will be able to comment on this idea.

 
Hyperon is ultimately intended as an alternative to current OpenCog
Atomspace, yes. However the current crude Hyperon prototype code is
definitely NOT intended as an alternative to Atomspace -- yet can
still be used for research experimentation.

> (1) ASP will solve most planning and constraint satisfaction problems in milliseconds. So you can do a 60-frames-per-second update rate if you wish.

Ah cool, good to know!!! My intuition on this is obsolete,
apparently ;p ... will look up some references...

So ultimately, would it make more sense for me to work with the current OpenCog, or to approach Hyperon?

Thank you all for your interest.

Michele
 

Anatoly Belikov

unread,
Apr 29, 2021, 6:18:35 AM4/29/21
to ope...@googlegroups.com
The two best planners from this planning competition are not based on ASP (https://ipc2018-classical.bitbucket.io/scores.html): the first is based on deep learning and the second on a heuristic with a pattern database. In 2014, half of the planners were based on the IBM CPLEX solver... I guess the reason is that one can't have a universally optimal solver for NP-hard problems, but one can find some shortcuts for a reasonably small subset of them.

On Wed, Apr 28, 2021 at 10:25 PM Linas Vepstas <linasv...@gmail.com> wrote:

Michele Thiella

unread,
Apr 29, 2021, 6:34:16 AM4/29/21
to opencog
I used CPLEX for an advanced operations research course focused on the TSP, and it is really powerful as a tool; IMO, the most beautiful examples are here: http://www.math.uwaterloo.ca/tsp/data/art/index.html
However, some of the best results are obtained with heuristics like a Parallel Genetic Algorithm with Edge Assembly Crossover (which I wrote from scratch, with enormous effort, following the related papers).

Yet they remain solvers without awareness of their actions. I think a planner/solver using the atomspace should have different characteristics from them, right?

Anatoly Belikov

unread,
Apr 29, 2021, 6:42:17 AM4/29/21
to ope...@googlegroups.com
Could you explain (with enough detail) how it is more natural? I am very much interested in allowing natural expressivity in the atomspace.

The task here is to craft a wooden pickaxe in Minecraft; just see how Vitaly introduces "if" into the knowledge base:

(= (if True $then $else) $then)
(= (if False $then $else) $else)
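For anyone not used to this notation: these two equations are rewrite rules -- an expression matching the left-hand side is replaced by the corresponding right-hand side, with the $-variables bound during matching. A throwaway Python sketch of that mechanism (the tuple encoding and function names are my own illustration, not actual Hyperon/MeTTa code):

```python
# Minimal term-rewriting sketch. Terms are nested tuples; strings
# starting with "$" are pattern variables. This only illustrates the
# idea behind (= (if True $then $else) $then).

def match(pattern, term, bindings):
    """Unify pattern against term, extending bindings; None on failure."""
    if isinstance(pattern, str) and pattern.startswith("$"):
        if pattern in bindings:
            return bindings if bindings[pattern] == term else None
        return {**bindings, pattern: term}
    if isinstance(pattern, tuple) and isinstance(term, tuple) \
            and len(pattern) == len(term):
        for p, t in zip(pattern, term):
            bindings = match(p, t, bindings)
            if bindings is None:
                return None
        return bindings
    return bindings if pattern == term else None

def substitute(term, bindings):
    """Replace bound variables in term."""
    if isinstance(term, tuple):
        return tuple(substitute(t, bindings) for t in term)
    return bindings.get(term, term) if isinstance(term, str) else term

# The two "if" equations from the knowledge base, as (lhs, rhs) pairs:
RULES = [
    (("if", "True", "$then", "$else"), "$then"),
    (("if", "False", "$then", "$else"), "$else"),
]

def rewrite(term):
    """Apply the first matching rule, or return term unchanged."""
    for lhs, rhs in RULES:
        b = match(lhs, term, {})
        if b is not None:
            return substitute(rhs, b)
    return term

print(rewrite(("if", "True", "pickaxe", "nothing")))   # pickaxe
print(rewrite(("if", "False", "pickaxe", "nothing")))  # nothing
```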

On Wed, Apr 28, 2021 at 9:37 PM Linas Vepstas <linasv...@gmail.com> wrote:

Ben Goertzel

unread,
Apr 29, 2021, 11:08:44 AM4/29/21
to opencog, Vitaly Bogdanov, Alexey Potapov
It's realistic to do this using Original OpenCog, and a fairly
interesting challenge. Whether the current Hyperon prototype is
suitable for this sort of experimentation right now is not clear to
me; Alexey or Vitaly could tell you...



--
Ben Goertzel, PhD
http://goertzel.org

“He not busy being born is busy dying" -- Bob Dylan

Linas Vepstas

unread,
Apr 29, 2021, 6:32:22 PM4/29/21
to opencog
On Wed, Apr 28, 2021 at 7:05 PM Ben Goertzel <b...@goertzel.org> wrote:
> Hyperon is ultimately intended as an alternative to current OpenCog
> Atomspace, yes.

Actually I should be more precise here.

It is not yet clear whether we'll end up replacing the current
Atomspace in the Hyperon system... we are open to doing so but this
isn't yet decided...

-- a static pattern matcher (which however manages bound variables
fairly sophisticatedly, and can match variables against whole
sub-metagraphs...), which then also gets an efficient implementation
for execution against distributed Atomspaces

Recall what I said earlier about "backward-forward" vs SAT. 
And let me recall a bit of history.

The current pattern matcher grew organically from a very simple device,
into the current very complex code block, as users requested new features.
(You were one of the many "users"). It would be nice to remove some of
the cruftier and uglier parts of it, but that would break assorted existing
systems, e.g. URE or PLN, and Nil would hate me for that. If it weren't
for all these nit-picky people who want things to work "just so", then parts
of it could be simplified. The ability to rewrite and simplify old code is
important to the health and vitality of a project. Maintaining backwards
compat to things that people want makes things difficult. You can only
"move fast and break things" when you have a big staff to fix what's broken.
But I digress...

If you create a new pattern matcher, you will eventually have to deal with
all of the various user requests for new features; either by saying no, or
by accommodating them, or by suggesting alternative ways of doing things.
 
an efficient implementation
for execution against distributed Atomspaces

Well, I am repeating myself. What about the current system is  not efficient?
Have you actually measured performance, and found it lacking?  Have you
been able to create a prototype that is faster?  If the prototype is faster, is
there some fundamental reason for this? Can you articulate that reason?
If you actually understand that reason, would you be able to backport it to
the current atomspace? If not, can you articulate why not?

That's about 5-6-7 hurdles that have to be cleared. Of course, you are free to
skip all of these questions, and start with a blank slate; programmers do this
*all the time*.  In approximately 9 out of 10 cases, the results are sub-optimal.
Yes, psychologists of software development have studied stuff like this. The
Western World has been doing software dev for half a century. Anecdotes,
experiences, war stories, etc. accumulate.
 
-- an Atomese language which is used to do a lot of the more
programmatic stuff done in the current Pattern Matcher,

The pattern matcher does nothing at all "programmatic" so I don't
know what this refers to.

as well as a
lot of the stuff habitually done in Scheme scripts in current OpenCog
usage

Such as?

Certainly, cruft has accumulated in the scheme scripts as well. They should be
viewed as a "rapid prototyping system", where new ideas are tested. If the
ideas are good, then, yes, you can convert them into Atomese.  But again,
I caution: not every idea is good. Many ideas which seem good this week or
this year turn out to be bad ideas next year.

You need a process of bad-idea-elimination, and this is hard when you have
users who want the code to work "just so", even if "just so" is an old bad idea.

This is different from the design pattern used in the current PM,
which embeds an awful lot of sophisticated program-control
functionality into the Pattern Matcher itself (thus making it way more
than a pattern matcher in any conventional sense)...

It's misnamed. It should be called "the query engine". It is somewhat
typical of query engines in other databases. Although the big brand-name
ones are far far more complex and sophisticated than what the atomspace
pattern matcher does. Conversely, the pattern matcher does things that
these other query engines cannot do (looking at you, SQL, SparQL, gremlin,
tinkerpop, jquery, GraphQL ...)


The current PM seems like much more complex code than the current
Atomspace, but maybe I'm missing something subtle in the latter?

The atomspace is tiny. It is less than 2KLOC of code. It took a lot of time
and a lot of effort to simplify it down to something that small.

The pattern matcher is 7KLOC of rather complicated code.

Atomese is more than 30KLOC of code. It is by far the biggest part
of what is in the atomspace.

Scheme bindings are about 6KLOC, python bindings are 5KLOC
-- Note that these are roughly the same size to each-other.
-- Note that they are only a little smaller than the pattern matcher.
-- Note that they are both 2x or 3x bigger than the atomspace.

There are 45KLOC of unit tests. Note that this is about the same size
as the grand-total of everything else above.

Since we're counting:
-- there's about 10KLOC of URE code.
-- there's about 10KLOC of PLN code
Note that both of these systems are larger than the pattern matcher, and that PLN requires URE, so that these two, combined, are 3x larger than the pattern matcher.

Perhaps these numbers will help with thinking about where all the complexity actually resides.

-- Linas

Michele Thiella

unread,
May 24, 2021, 12:27:25 PM5/24/21
to opencog
Hello everyone,

Finally, I was able to pass the first planning test for the blocksworld problem, using ContextLinks.
(For now, it has some ad-hoc things/rules and others that are missing)

But as long as I look for a column of 3 blocks, everything is fine and the times for the BC are very short,
while when I look for a column of 4 or more I run into RAM overflow.
Unfortunately, I'm on Linux on an external SSD and the swap area is there.
Consequently, with a goal of 4 blocks, I use more than 8 GB (I only have 8) and it starts swapping, but the time gets longer and I can't finish the execution.

Would anyone be able to run the test_pickup_stack.scm file and share the log file with me?
Thanks a lot in advance!
(There should be no errors; just do (load "path/to/file/test_pickup_stack.scm") in the telnet shell. Let me know if something is wrong, thanks!)

I'm playing with the URE parameters to see if I can optimize the inference.
(Extra question) Is there a URE parameter to terminate at the first BC solution found?

Michele

Linas Vepstas

unread,
May 24, 2021, 2:08:04 PM5/24/21
to opencog, Nil Geisweiller
I know nothing about the blocksworld problem, so I cannot help directly.  Indirectly, you can use (cog-report-counts) to monitor the number of atoms in the atomspace -- I typically see an average of about 1KB or 2KB per atom.  So, a few GB is enough for millions of atoms, normally. This will give you a hint of what might be going on there.

The only "problem" is that URE uses some temporary atomspaces; those are not included in the count. The URE also mallocs structures that are not part of the atomspace.

There is a third but unlikely issue -- guile garbage collection not running often enough.  Take a look at (gc-stats) to get info, and (gc) to manually run garbage collection. It's unlikely this is a problem, but there were issues with older guile-- say, version 2.0. I'm hoping you are on version 3.0, or at least version 2.2.

Perhaps @Nil Geisweiller can help with URE ram issues.

--linas


Michele Thiella

unread,
May 24, 2021, 2:23:23 PM5/24/21
to opencog
Great, I really needed some monitoring! I'll take a look at these numbers!

Another thing I forgot: is there a way to get the inference tree related to a solution obtained from the BC?
I saw that there was some Scheme code for this somewhere, but I didn't understand how it worked. Thank you.

PS. Blocksworld problem (very briefly) = some blocks on a table; 4 actions (pick-up, put-down, stack: put block 1 on block 2, unstack: pick up block 1 which is on block 2); objective: build a tower of blocks.
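(For comparison, as plain state-space search the problem is tiny. A throwaway brute-force sketch in Python -- the state encoding and function names are my own, nothing to do with the Atomese formulation -- solves it instantly:)

```python
from collections import deque

# Blocksworld as plain state-space search: a state is (stacks, holding),
# where stacks is a frozenset of tuples (bottom..top) and holding is a
# block name or None. The four classic actions are encoded directly.

def successors(state):
    stacks, holding = state
    if holding is None:
        for s in stacks:
            rest = stacks - {s}
            top = s[-1]
            if len(s) == 1:
                yield ("pick-up", top), (rest, top)                 # lone block off the table
            else:
                yield ("unstack", top, s[-2]), (rest | {s[:-1]}, top)
    else:
        yield ("put-down", holding), (stacks | {(holding,)}, None)
        for s in stacks:
            yield ("stack", holding, s[-1]), ((stacks - {s}) | {s + (holding,)}, None)

def plan(start, goal_stack):
    """Breadth-first search for a shortest plan building goal_stack (bottom..top)."""
    start_state = (frozenset(start), None)
    seen = {start_state}
    queue = deque([(start_state, [])])
    while queue:
        (stacks, holding), path = queue.popleft()
        if holding is None and goal_stack in stacks:
            return path
        for action, nxt in successors((stacks, holding)):
            ns = (frozenset(nxt[0]), nxt[1])
            if ns not in seen:
                seen.add(ns)
                queue.append((ns, path + [action]))
    return None

# Four blocks flat on the table; build the tower d-c-b-a (d at the bottom).
p = plan([("a",), ("b",), ("c",), ("d",)], ("d", "c", "b", "a"))
print(len(p), p)  # 6 actions: pick-up/stack for each of c, b, a
```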

Michele

Michele Thiella

unread,
May 25, 2021, 10:02:23 AM5/25/21
to opencog
I know nothing about the blocksworld problem, so I cannot help directly.  Indirectly, you can use (cog-report-counts) to monitor the number of atoms in the atomspace -- I typically see an average of about 1KB or 2KB per atom.  So, a few GB is enough for millions of atoms, normally. This will give you a hint of what might be going on there.

Ok i did some tests. 
Guile version is 2.2 (I'm on Ubuntu and haven't figured out whether Guile 3.0 is available for me).

1) The first test tries to solve the BC with a goal corresponding to a column of 3 distinct blocks. Execution time = a few seconds.
2) The second test (which is the one inside my repo) tries to solve the BC with a goal corresponding to a column of 4 distinct blocks. Execution time = infinite! 

Running (cog-report-counts) I get these results :

1 test)
((ConceptNode . 8) (NumberNode . 2) (PredicateNode . 9) (SetLink . 5) (ListLink . 30) (MemberLink . 7) (ContextLink . 5) (AndLink . 67) (NotLink . 14) (PresentLink . 7) (VariableNode . 11) (VariableList . 6) (DefineLink . 7) (BindLink . 7) (EvaluationLink . 44) (TypeNode . 6) (TypeChoice . 2) (TypedVariableLink . 11) (EqualLink . 14) (ExecutionOutputLink . 7) (SchemaNode . 2) (DefinedSchemaNode . 7) (GroundedSchemaNode . 6) (InheritanceLink . 7) (ExecutionLink . 2))

2 test)
((ConceptNode . 8) (NumberNode . 2) (PredicateNode . 9) (SetLink . 4) (ListLink . 11) (MemberLink . 7) (ContextLink . 5) (AndLink . 7) (NotLink . 17) (PresentLink . 7) (VariableNode . 16) (VariableList . 6) (DefineLink . 7) (BindLink . 7) (EvaluationLink . 25) (TypeNode . 6) (TypeChoice . 3) (TypedVariableLink . 19) (EqualLink . 17) (ExecutionOutputLink . 7) (SchemaNode . 2) (DefinedSchemaNode . 7) (GroundedSchemaNode . 6) (InheritanceLink . 7) (ExecutionLink . 2))

(Obviously, the first test completes, so there are more atoms than in the second, which cannot continue. Either way, they still seem very few to me.)


In my BC, the fuzzy-conjunction-introduction-rule (changing types) is run first to extract the atoms from an AndLink (which is the goal = column of blocks).
Next, the inference trees of the extracted atoms are expanded, and the BC ends with the result.

The only difference between the two tests is that:

1) In the first test, the rule is executed with nary = 5 (2 EvaluationLinks for "A on B" and "B on C" + 3 NotLinks to differentiate the three blocks)
2) While in the second, the rule is executed with nary = 9 (3 EvaluationLinks for "A on B", "B on C" and "C on D" + 6 NotLinks to differentiate the four blocks)


Looking at the RAM occupation ("free -h" shows 4.6 GB occupied out of 7.6 GB total at the start):

1 test) RAM DOES NOT CHANGE.
2 test) I exceed 8 GB and start swapping. Only after half an hour is the selection of the first rule (fuzzy-conjunction-introduction-rule) printed to the logger.
            So it looks like it works fine (as it should) but is extremely slow (I think because my swap area is on an external SSD).

Summing up:

- I'm assuming it's RAM's fault.
- In test 2, if I remove a single NotLink from the goal, so that the fuzzy-conjunction-introduction-rule is executed with nary = 8 (instead of 9), it works! It takes a few seconds (but I lose a non-equality condition between 2 blocks).
It seems absurd to me that with 8 atoms it works and with 9 it doesn't ...
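Actually, a back-of-envelope calculation suggests why one extra clause could be so painful. If (and this is just my guess at what happens internally -- I have not checked the URE source) the rule has to consider clause orderings/groundings of an n-ary conjunction combinatorially, the work grows factorially:

```python
from math import factorial

# Guessed cost model, for illustration only: orderings considered by an
# n-ary conjunction rule grow as n!.
for n in (5, 8, 9):
    print(n, factorial(n))
# 5 -> 120, 8 -> 40320, 9 -> 362880
```

So going from nary = 8 to nary = 9 would multiply the search space by 9 on top of an already large number, which could plausibly turn seconds into a RAM overflow.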

- Would someone be kind enough to try running the test_pickup_stack.scm file and share the log file with me?
Just use a command to load the file, as written in the README.
That way I could understand whether it is a problem with my PC or with the code. Sorry for the inconvenience and thanks in advance.


The only "problem" is that URE uses some temporary atomspaces; those are not included in the count. The URE also mallocs structures that are not part of the atomspace.

There is a third but unlikely issue -- guile garbage collection not running often enough.  Take a look at (gc-stats) to get info, and (gc) to manually run garbage collection. It's unlikely this is a problem, but there were issues with older guile-- say, version 2.0. I'm hoping you are on version 3.0, or at least version 2.2.

Perhaps @Nil Geisweiller can help with URE ram issues.

--linas


- (gc-stats) before doing anything:
((gc-time-taken . 281744537) (heap-size . 6316032) (heap-free-size . 323584) (heap-total-allocated . 47119248) (heap-allocated-since-gc . 2700880) (protected-objects . 15) (gc-times . 19))

- (gc-stats) after 1 test:
((gc-time-taken . 305335411) (heap-size . 6316032) (heap-free-size . 372736) (heap-total-allocated . 50000112) (heap-allocated-since-gc . 2775904) (protected-objects . 15) (gc-times . 20))

- (gc-stats) in the 2 test when I ran out of RAM:
((gc-time-taken . 334108055) (heap-size . 6316032) (heap-free-size . 1228800) (heap-total-allocated . 54519584) (heap-allocated-since-gc . 1691024) (protected-objects . 16) (gc-times . 22))

I'm trying to understand their meaning. Do these numbers tell you something?
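One thing I do notice (assuming heap-size is reported in bytes, which I believe is libgc's convention): the Guile heap itself stays tiny and identical across all three snapshots, so the ~8 GB must be allocated outside the Guile heap -- which would fit with the remark above that the URE's temporary atomspaces and C++ mallocs are invisible to Guile's GC. A quick check:

```python
# heap-size from all three (gc-stats) snapshots above is the same value;
# assuming bytes, it converts to roughly 6 MiB.
heap_size = 6316032
print(round(heap_size / 2**20, 1))  # 6.0 (MiB)
```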


Thanks again for the help.

Michele

Anatoly Belikov

unread,
May 25, 2021, 10:11:37 AM5/25/21
to ope...@googlegroups.com
You introduced ContextLinks, but they don't provide the full state for each action. For example, in pickup you create (EvaluationLink (PredicateNode "not-clear") (VariableNode "?ob")) for some object "?ob", but there are probably other objects that are not free. In C++ we could represent a state by a boolean array: when applying some action, we would copy the bit-vector of the previous state and modify some bits. You can emulate this behaviour by using ConceptNodes for states, storing the values of all boolean variables as properties of the ConceptNode. You will have to rewrite the preconditions to call a Python or Scheme function:

; precondition of pickup-action
        (AndLink
            (PresentLink
                (InheritanceLink
                    (VariableNode "?state")
                    (ConceptNode "state")))
            (PresentLink
                (InheritanceLink
                    (VariableNode "?ob")
                    (ConceptNode "object")))
            (EvaluationLink
                (GroundedPredicateNode "py: can_pickup")
                (ListLink
                    (VariableNode "?state")
                    (VariableNode "?ob"))))

In can_pickup you check whether the passed (state, object) pair satisfies the precondition.

In the ExecutionOutputLink you can create a new ConceptNode for the new state by copying all the properties from the previous state, with the necessary changes.


On Mon, May 24, 2021 at 8:27 PM Michele Thiella <acikoa...@gmail.com> wrote:

Michele Thiella

unread,
May 25, 2021, 11:11:56 AM5/25/21
to opencog
For now I have only added the ContextLinks and the rules needed to solve the BC of the blocksworld problem
(for example, the put-down and unstack rules are missing).

It is true that I do not take a full state into account when I perform an action.
That would make sense in the general case, so that any action that is added is correct regardless of the others, because it is based only on the state.

Instead, the correctness of my current structure lies in the last BindLink, the one that corresponds to the complete inference tree that solves my goal.
The conditions for replacing VariableNodes with ConceptNodes are accumulated across the various rules used in the tree.
So at the end the blocks will be stacked correctly, and the result will correspond to the possible permutations of the n blocks on the table taken X at a time (X = number of blocks in my column).

For now it works (apart from this RAM problem), but the unstack and put-down rules are not there, because they are actually not needed for correct planning.

I feel like I'm cheating a bit.
If the correct idea is to use a state, it will be good to rewrite the rules.

Still, I didn't quite understand what the problem with my structure is.
If I do BC I can't have partial solutions, so I just need the final BindLink to contain all the necessary conditions; I don't need them to be verified for every rule.

Certainly I have not explained this well, perhaps because I have not understood the matter well.

Michele

Linas Vepstas

unread,
May 25, 2021, 5:16:20 PM5/25/21
to opencog, Matthew Ikle
OK, some AI theory below. Maybe Ben, who has driven a lot of this, can pipe up to clarify. Maybe Matt Ikle has something to add.

On Mon, May 24, 2021 at 1:23 PM Michele Thiella <acikoa...@gmail.com> wrote:
Great I really needed some monitoring! I'll take a look at these numbers!

Another thing I forgot: is there a way to get the inference tree related to a solution obtained from BC?

The C++ code contains "BIT" structures, which hold the inference tree. I don't know how to access it.

I saw that there was a scheme code somewhere but I didn't understand how it worked. Thank you

PS. Blocksworld problem (very briefly) = some blocks on a table; 4 actions (pick-up, put-down, stack: put block 1 on block 2, unstack: pick up block 1 which is on block 2); objective: build a tower of blocks.

OK. So this exposes a subtle issue in AI. So, 1970's-onward GOFAI used crisp-logic, true-false values to determine the satisfiability of any given inference, deduction, rule, etc. Somewhere along the way, it was realized that this is not enough, and that one needed something like "fuzzy logic" or "Bayesian probability" or even "non-monotonic reasoning" or stuff like that, to obtain suitable inferences in many real-world situations.

Ben's PLN (Probabilistic Logic Networks) was proposed in that context, in that environment.  It consists of a collection of rules, generally resembling those of conventional predicate logic (as found in textbooks, wikipedia pages, etc.) except that they are modified to incorporate probability, generally resembling those of Bayesian statistics, (again, as found in conventional textbooks) plus an extra parameter, the "confidence", to model/control the spread of uncertainty.  The grand dream here is that this is enough to handle real-world logic and real-world reasoning.  The technical difficulty here is combinatoric explosion: tracking probabilities means having to track many many more possibilities during inference: gazillions of inference trees. Devices like "attention allocation" and even "openpsi" are invented to limit and control the search space.

(Aside: you asked about "scheme" -- the inference rules are encoded in scheme. The actual inference engine is written in C++. The software engineering goal was to separate the mechanics of inference from the specific weights, formulas, inputs, outputs of the rules. That's why the inference tree is in C++, not in scheme.)

How well does this work out in real life? Well, I claim that blocks-world exposes some fundamental failings, mismatches, misunderstandings of a probabilistic approach. One is "obvious": for perfect blocks-world physics, say, minecraft, where blocks stack perfectly and never fall down, you don't need probabilistic reasoning. In this sense, using PLN to solve blocksworld problems is grossly inefficient and inappropriate. A crisp logic solver and motion planner should be enough. God knows, there must be a zillion of these: this is firmly entrenched in mainstream textbook GOFAI. As to whether any of them are on github, are maintained, and actually work... I would not be surprised if the answer is zero. (Such code is difficult to maintain, yet not commercially $$$ interesting. So no one does it.)

In the real world, you cannot stack blocks arbitrarily high: you can't stack them evenly. They are not perfect cubes. There's friction and wind gusts and grains of sand. Towers lose balance and fall down. So, how well does probabilistic reasoning handle this situation? Well, it doesn't. The generic stacking problem is still best addressed via crisp-logic motion planning. The physics of imperfect cubes becomes a mechanical engineering problem: what is the best way to model non-parallel cube faces? Off-center stacking? A center of mass that is not at the center? Wobbliness due to non-flat faces? You can't just say "probability" or "fuzzy logic" and get any kind of reasonable answer. You really are dealing with a mechanical engineering problem here. PLN is not a mechanical-engineering solver.

Consider the stacking of even one cube on top of another. The result depends on how slippery the faces are, and the local direction of gravity.  You can't stack cubes on the side of a hill.

I'm grasping for something that is hard to put into words. The "axes of representation" of probability do not line up with the "axes of mechanical motion".  Although one might be able to say that "if two blocks are stacked with a center-of-mass offset of x, then the probability of falling over is p", this statement is not composable: you can't say "ah ha, therefore, the probability for stacking two blocks is (some formula involving p)"  Non-composable statements cannot be used in inference chains.  Yet, mechanical engineering (or, at least, mechanical reasoning) is very much about  inference chains.
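
A toy numeric sketch of that non-composability point (the numbers below are invented, purely for illustration): the per-joint probability p only composes into a tower probability under an independence assumption, and a wind gust violates exactly that assumption.

```python
# Illustration only -- made-up numbers, not PLN.
p_fall = 0.1   # probability that a single offset joint topples
n = 5          # stacked joints in the tower

# Naive composition assumes the joints fail independently:
p_tower_independent = 1 - (1 - p_fall) ** n   # ~0.41

# But one gust disturbs every joint at once; in the fully
# correlated limit the tower is only as weak as one joint:
p_tower_correlated = p_fall                   # 0.1
```

The two answers differ by a factor of four, and nothing in the statement "p per joint" tells you which composition rule applies: that knowledge lives in the mechanics, not in the probability.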

You could say: "we've got the wrong inference rules" -- the inference rules should be about inclined planes and levers, pulleys and ropes (blocks and girders and i-beams...). Walking this path, one very quickly reinvents the history of CAD/CAM software. We know where that goes, and it's not really AI.

So what is PLN good for? I honestly don't know.  I know that it can solve a collection of toy probabilistic inference models, but I am not aware of any real-world inference problems that can be mapped into that.

I am trying to grasp ahold of an idea here, but none of my words above directly grab it.  There is a certain grammar of reasoning. This grammar is crisp. For mechanical engineering, this grammar can be attached to differential equations. The combination of this can be replaced by rules-of-thumb, which are "crisp", while always concluding with "it depends".

To conclude:
1) you should study all the usual crisp-logic reasoning and motion-planning and constraint-satisfaction algorithms. This provides an important background.  For blocks-world, you might want to find and employ (or construct) a crisp-logic motion planner.

2) You should study probability and machine learning. This is also important.

If/when you get past the above, I invite you (or anyone else) to join me in the exploration of how to extract inference rules from observations. I have an inkling of how to do this. I am trying to do it in the github project https://github.com/opencog/learn but it is extremely difficult.  The goal of the project would be to (for example) extract the rules of mechanical engineering by playing with blocks -- the discovery of the rules of thumb for stacking imperfect blocks.

--linas

 

Linas Vepstas

unread,
May 25, 2021, 5:26:24 PM5/25/21
to opencog
On Tue, May 25, 2021 at 9:02 AM Michele Thiella <acikoa...@gmail.com> wrote:

1st test)
((ConceptNode . 8) (NumberNode . 2) (PredicateNode . 9) (SetLink . 5) (ListLink . 30) (MemberLink . 7) (ContextLink . 5) (AndLink . 67) (NotLink . 14) (PresentLink . 7) (VariableNode . 11) (VariableList . 6) (DefineLink . 7) (BindLink . 7) (EvaluationLink . 44) (TypeNode . 6) (TypeChoice . 2) (TypedVariableLink . 11) (EqualLink . 14) (ExecutionOutputLink . 7) (SchemaNode . 2) (DefinedSchemaNode . 7) (GroundedSchemaNode . 6) (InheritanceLink . 7) (ExecutionLink . 2))

This is a tiny atomspace. Fits in less than a megabyte.


- (gc-stats) in the 2nd test, when I ran out of RAM:
((gc-time-taken . 334108055) (heap-size . 6316032) (heap-free-size . 1228800) (heap-total-allocated . 54519584) (heap-allocated-since-gc . 1691024) (protected-objects . 16) (gc-times . 22))

I'm trying to understand their meaning. Do these numbers tell you something?

Time is in nanoseconds. (gc-time-taken . 334108055) means you spent 1/3rd of a second in GC.  Size is in bytes: (heap-size . 6316032) means that guile has allocated 6 megabytes of RAM.  The total shown here (heap-total-allocated . 54519584) means 54 MBytes allocated, and I guess 48MBytes freed, leaving the rest in the current heap.  GC was called a total of 22 times.
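
For what it's worth, the arithmetic above can be re-derived mechanically from the quoted alist; this small Python sketch assumes nothing beyond the numbers in the message (decimal megabytes, to match the figures quoted):

```python
# Units of guile's (gc-stats): times in nanoseconds, sizes in bytes.
gc_stats = {
    "gc-time-taken": 334108055,
    "heap-size": 6316032,
    "heap-free-size": 1228800,
    "heap-total-allocated": 54519584,
    "gc-times": 22,
}

gc_seconds = gc_stats["gc-time-taken"] / 1e9        # ~0.33 s spent in GC
heap_mb = gc_stats["heap-size"] / 1e6               # ~6.3 MB current heap
total_mb = gc_stats["heap-total-allocated"] / 1e6   # ~54.5 MB ever allocated
freed_mb = total_mb - heap_mb                       # ~48.2 MB reclaimed
```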

In short, you are not leaking memory here. The grief is something deep in the guts of the chainer/URE.

--linas

Linas Vepstas

unread,
May 25, 2021, 5:39:28 PM5/25/21
to opencog
Let me balance my earlier theoretical remarks with some clarifications.

-- Bindlinks are more-or-less pure crisp-logic things. Any single one gives a pure, crisp true/false result, and you can certainly write solvers with them.

-- The PLN rules are ways of assembling them into inference trees with non-crisp probabilistic results.

-- Anatoly's idea sounds like a good one to me. Anything you can do to limit the search space is always a good thing.   Things like 

   (PresentLink
      (InheritanceLink
         (VariableNode "?ob")
         (ConceptNode "object")))

can vastly speed the search by limiting it only to those ?ob's that are actually objects, instead of trying everything under the sun.  This is very useful in intermediate steps, to avoid combinatoric explosion.
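
A toy illustration of why the constraint helps (plain Python counting, not the actual pattern-matcher internals): grounding a two-variable pattern over everything, versus only over typed candidates.

```python
# Hypothetical atomspace contents: 10 objects among 100 atoms.
atoms = [("ConceptNode", "obj%d" % i) for i in range(10)] \
      + [("NumberNode", str(i)) for i in range(90)]

# The PresentLink/InheritanceLink constraint keeps only the objects:
objects = [a for a in atoms if a[0] == "ConceptNode"]

unconstrained = len(atoms) ** 2     # 10000 candidate groundings to try
constrained = len(objects) ** 2     # 100 candidate groundings to try
```

A 100x reduction for a two-variable pattern; for longer inference chains the savings compound at every step.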

-- I have no clue what ContextLinks are good for. That's a PLN thing.

Linas.



Anatoly Belikov

unread,
May 26, 2021, 6:08:46 AM5/26/21
to ope...@googlegroups.com
It was my suggestion to represent search states with ContextLinks, somewhat like this:

(ContextLink
        (ConceptNode "state1")
        (AndLink
                (ConceptNode "state0") ; previous state
                (EvaluationLink (stv 0 1)
                        (PredicateNode "not-free")
                        (ConceptNode "block5"))))   ; and so on; here are all valid combinations of predicates and their arguments, as in the C++ planners I am familiar with.

Now I believe it makes sense to represent the state using values, maybe just with LinkValue and accessor functions.
The research direction might be the integration of a heuristic function into the URE for selection of the next state to expand.

Wed, 26 May 2021 at 01:39, Linas Vepstas <linasv...@gmail.com>:

Linas Vepstas

unread,
May 26, 2021, 12:16:44 PM5/26/21
to opencog
On Wed, May 26, 2021 at 5:08 AM Anatoly Belikov <awbe...@gmail.com> wrote:
It was my suggestion to represent search states with ContexLinks, somehow like that:

Did you mean "represent state transition rules"?  A state transition rule has the form "if (lots of preconditions) then (a move to state1 is allowed)"

I don't recall if we have any preferred or recommended way of writing those. There is a sequence of four state-machine demos, each building on the last:


I notice that the last three use ContextLink to represent state transition rules.  This might be an abuse or misuse of ContextLinks. Certainly, the wiki page for them https://wiki.opencog.org/w/ContextLink suggests something very different.

The four examples above were written before StateLink was invented, and before Values.  They should be reviewed, and maybe re-written or modernized.  Anatoly, would you care to do this? Maybe Michele could be a guinea pig, and tell us what's wrong or unclear about these demos?

The nice thing about StateLink is that it is atomic: There can only ever be one; there are never accidentally two or zero, and it is thread-safe: even if multiple threads are all setting the state at the same time, there will always be just one.

The "ugly thing" about StateLink is that it's an Atom, and thus the fastest rate of change is limited by AtomSpace insertion and deletion, which is in the ballpark of 50K/second, depending on how old your CPU is and what language you are using (C++/python/scheme). Value changes are much much faster, I think in the 200K/second or 500K/second range.

Certainly, the basic blocksworld can be represented as a kind of stateful system, and the goal of the URE would then be to find a sequence of moves to get from some position to another.  We really should have a documented, maintained example for the "recommended way of doing this".  Ideally, it would use the same kind of rule-style as the above four examples, so that all the demos are consistent.

I have no clue why the URE is misbehaving.... is it being used incorrectly, or is it broken?  Do I need to personally look at this?

--linas

Anatoly Belikov

unread,
May 26, 2021, 1:32:26 PM5/26/21
to ope...@googlegroups.com
There is some collision in terminology.
The classical planning problem (like blocksworld) is defined by a tuple <S, s0, S_goal, A, f>, where S is the set of states, s0 the initial state, S_goal the set of goal states, A the set of actions, and f the state transition function; f accepts a state and an action and outputs a state. The task is to find a sequence of (state, action) pairs leading from the initial state to the goal state. Unlike FSM states, states in planning have structure, which enables informed search. And there are too many states: if we work in a boolean domain like blocksworld, there are 2^(bit-length of the state) of them, so we can't build an FSM for such problems.

So I meant to use ContextLinks to represent elements of S, while the state transition rules in Michele's example are represented by BindLinks. There is one BindLink for each action, which is unlike the definition; maybe it would be better to have one BindLink for the computation of the next state, but I don't see a simple way to write it in pure Atomese either. So I suggest working with ListValue for states + EvaluationLinks with grounded predicates.

For the reference: Lipovetzky, N. (2012). Structure and inference in classical planning.
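
The tuple <S, s0, S_goal, A, f> above maps directly onto code. A toy Python sketch (illustrative names only, not any OpenCog API), with states as frozensets of ground facts and a plain breadth-first search for the action sequence; the domain is deliberately tiny -- two light switches:

```python
from collections import deque

A = [("flip", "sw1"), ("flip", "sw2")]   # the action set

def f(state, action):
    """State transition function: (frozenset of facts, action) -> state."""
    _, sw = action
    fact = ("on", sw)
    return state - {fact} if fact in state else state | {fact}

def search(s0, is_goal):
    """BFS from s0 to any state satisfying is_goal; returns action list."""
    frontier, seen = deque([(s0, [])]), {s0}
    while frontier:
        state, path = frontier.popleft()
        if is_goal(state):
            return path
        for a in A:
            nxt = f(state, a)
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [a]))
    return None

s0 = frozenset()
print(search(s0, lambda s: {("on", "sw1"), ("on", "sw2")} <= s))
# → [('flip', 'sw1'), ('flip', 'sw2')]
```

Because the state space is exponential, a real planner replaces the blind BFS with the informed search mentioned above; the shape of the interface (s0, goal test, A, f) stays the same.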

Wed, 26 May 2021 at 20:16, Linas Vepstas <linasv...@gmail.com>:

Linas Vepstas

unread,
May 27, 2021, 2:30:57 PM5/27/21
to opencog
Ah!

Thank you Anatoly! Now we are making progress.

On Wed, May 26, 2021 at 12:32 PM Anatoly Belikov <awbe...@gmail.com> wrote:
There is some collision in terminology.
The classical planning problem(like blockworld) is defined by a tuple <S, s0, S_goal, A, f>, where S is set of states, s0 - initial state, S_goal - set of goal states, A - set of actions, f - state transition function, f accepts a state and an action and outputs a state. The task is to find a sequence of (state, action) pairs leading from the initial state to the goal state.

Yes, that seems like a good definition. I believe that many problems are similar, if not the same: e.g. theorem-proving, where `f` consists of axioms and inference rules.  The intent was that the URE would be able to solve these kinds of problems, using an API that is close to the above. Crisp solvers such as the Potsdam ASP solver are good at this (well, you'd have to write a bunch of code to convert the above form into ASP, which is a good exercise, but is not basic research).

The task w.r.t. the atomspace is how to represent S, A, f using Atomspace structures, in some compact, reasonable, flexible way. For blocksworld, the size of S is literally astronomical, so `f` cannot be a lookup table in the literal sense. Rather, it has to be encoded to accept only that tiny fraction of the grand-total S that we are working with.

Unlike FSM states, states in planning have structure, which enables informed search. And there are too many states: if we work in a boolean domain like blocksworld, there are 2^(bit-length of the state) of them, so we can't build an FSM for such problems.

Yes, absolutely. I did not mean to imply this. I was attempting to say that the representation of the state transition function used in one demo/example should be similar to, or the same as, the representation used in other demos/examples, and both of those should align with whatever the URE would accept as valid.  (FWIW, the OpenPsi codebase has a representation for `f`, but it has changed several times. The OpenPsi codebase solves the same kinds of problems as the URE, but using a radically different approach.  It's actually quite elegant and interesting... It would be nice if it used the same representation.)


So I meant to use ContextLinks to represent elements of S. While state transition rules in Michel's example are represented by BindLinks. There is one BindLink for each action, which is unlike the definition, maybe it would be better to have one BindLink for the computation of the next state, but I don't see a simple way to write it in pure atomese either. So I suggest working with ListValue for states + EvaluationLinks with grounded predicates.

Two remarks.  First remark

1) BindLink is an "active" link, and not a "knowledge representation" link.  So, for example, implication, "if P then Q", can be represented with ImplicationLink P Q.  That's it -- nothing happens, because ImplicationLinks don't "do anything".  You can also represent it as BindLink P Q, which is "equivalent" to ImplicationLink for static knowledge representation.  The difference is that you can actually execute the BindLink, and it will change the contents of the atomspace. The need to execute it has, over time, forced a complicated and sometimes-crufty syntax onto it: you must use VariableNodes, you must use PresentLinks, etc.  ImplicationLink does not have any of these arcane requirements.

So, I still haven't glanced at Michele's code (maybe I should?), but what I am trying to say is that maybe one should not use BindLinks unless the intent is to actually run them.  If the goal is to represent state transitions, then maybe ImplicationLink or ContextLink or something else is enough.

Second remark:

I don't see a simple way to write it in pure atomese

Ah! Well, this is exactly what experimentation is for. Try it, see what happens, and if it doesn't work, try figuring out why not.

I can take a shot at a "pure Atomese" representation, but right now, I still don't quite understand what the issue is. You want .. what ... some way of specifying the "classical planning problem" in a fairly generic fashion, such that it works with this blocksworld thing?

Can we do this by email? I don't want to sit by my lonesome, come up with something, write a dissertation on it, and then have no one pay attention or read it or care.  So a one-step-at-a-time communal activity?

--linas

 

Linas Vepstas

unread,
May 27, 2021, 3:08:16 PM5/27/21
to opencog
OK, I talk too much. Sorry. One last note:

Michele, I just looked at https://github.com/raschild6/blocksworld_problem and I now understand the problem to be "given some PDDL, convert it to atomese"

I suggest that you should restructure the problem into smaller parts, a sequence of individual demos/examples.  So, demo-1-basic-state.scm, demo-2-single-move.scm, demo-3-compound-move.scm, etc. For example: in rules_pickup_stack.scm there are no comments, so I don't really understand what you are trying to do there. I can guess, but guessing is hard.  What is the equivalent PDDL?  I notice that there are no comments in `domain.pddl` either. Again, I can guess, but guessing is hard.

I suggest writing it like so: "here is some PDDL. It means that ?x is a block and ?y is its position and that a block is moved if pqr. Here is the equivalent atomese. The variable ?b is the current block and etc."

I think that if you try to explain it in English, in addition to PDDL and Atomese, this will help clarify for you, and for everyone, what's actually happening...

--linas





On Tue, May 25, 2021 at 10:11 AM Michele Thiella <acikoa...@gmail.com> wrote:

Nil Geisweiller

unread,
May 28, 2021, 1:53:03 AM5/28/21
to ope...@googlegroups.com, Anatoly Belikov
On 5/26/21 1:08 PM, Anatoly Belikov wrote:
> The research direction might be the integration of a heuristic function
> into the URE for selection of the next state to expand.

The URE supports control rules, even better these control rules can be
experimentally learned, see

https://wiki.opencog.org/w/URE_Control_Rules

What's missing to bring the URE to the next level, in terms of
efficiency, is to synergize it with ECAN, but that's a big project I
have unfortunately no time for (hopefully soon though).

Nil


Nil Geisweiller

unread,
May 28, 2021, 2:44:34 AM5/28/21
to ope...@googlegroups.com, Michele Thiella
Hi Michele,

On 5/24/21 7:27 PM, Michele Thiella wrote:
> Would anyone be able to run the test_pickup_stack.scm file? and share me
> the log file?
> it's in my repo: https://github.com/raschild6/blocksworld_problem
> Thanks a lot in advance!
> (There should be no errors, just do (load
> "path/to/file/test_pickup_stack.scm") in the telnet shell. Report me if
> there is something wrong, thanks!)

telnet shell? You can use the guile REPL directly, just in case you
didn't know.

I can run it; it takes 10 minutes and ~5GB of RAM. But if your DE
(desktop environment) is heavy, your OS could easily eat up all the RAM.
Maybe try to run it with a lighter DE, or even better without a display
server, just in text mode. Or, unless you're willing to get deep into
the C++ URE code to optimize it, buy more RAM. I found that these days
16GB of RAM is a bare minimum for development (let alone AI development).

Please find the log attached, BTW.

> I'm playing with the URE parameters to see if I can optimize the inference.
> (extra question) is there a URE parameter to terminate at the first BC
> solution found?

I don't think so. The URE will terminate either once it has reached the
maximum number of iterations or has exhausted the search. All available
parameters can be obtained with

(help cog-bc)

Obviously feel free to add missing parameters if you wish, contributions
are always welcome.

Nil

opencog.log.xz

Nil Geisweiller

unread,
May 28, 2021, 2:56:28 AM5/28/21
to ope...@googlegroups.com, Michele Thiella
On 5/24/21 9:23 PM, Michele Thiella wrote:
> Another thing I forgot: is there a way to get the inference tree related
> to a solution obtained from BC?
> I saw that there was a scheme code somewhere but I didn't understand how
> it worked. Thank you

Yes, just give it a trace atomspace and all inferences will be dumped
there, see (help cog-bc).

In order to understand how to use it you may study the following example

https://github.com/opencog/pln/tree/master/examples/pln/inference-control-meta-learning

In fact one excellent goal for your project could be to learn control
rules to speed up problem solving. So basically you would

1. Run your problem with varying levels of difficulties, collecting
inference traces.
2. Mine the inference traces to discover control rules.
3. Pass these control rules to the URE to hopefully speed up problem
solving for the next rounds.

I was only able to achieve that for the trivial alphabetic problem in
the link above. Making it work for a less trivial problem would be awesome.

Nil

Nil Geisweiller

unread,
May 28, 2021, 3:06:57 AM5/28/21
to ope...@googlegroups.com, Michele Thiella


On 5/28/21 9:57 AM, Nil Geisweiller wrote:
> On 5/24/21 9:23 PM, Michele Thiella wrote:
>> Another thing I forgot: is there a way to get the inference tree
>> related to a solution obtained from BC?
>> I saw that there was a scheme code somewhere but I didn't understand
>> how it worked. Thank you
>
> Yes, just give it a trace atomspace and all inferences will be dumped
> there, see (help cog-bc).
>
> In order to understand how to use it you may study the following example
>
> https://github.com/opencog/pln/tree/master/examples/pln/inference-control-meta-learning
>
>
> In fact one excellent goal for your project could be to learn control
> rules to speed up problem solving.  So basically you would
>
> 1. Run your problem with varying levels of difficulties, collecting
> inference traces.
> 2. Mine the inference traces to discover control rules.
> 3. Pass these control rules to the URE to hopefully speed up problem
> solving for the next rounds.
>
> I was only able to achieve that for the trivial alphabetic problem in
> the link above.  Making it work for a less trivial problem would be
> awesome.

It's BTW totally publishable material, the only reason I didn't publish
is because I consider the alphabetic problem to be too trivial, and then
I had to move on to other things and didn't have time to try on less
trivial problems. If you can achieve that on the blocksworld problem,
we can write a paper about it.

Nil

Nil Geisweiller

unread,
May 28, 2021, 3:10:10 AM5/28/21
to ope...@googlegroups.com, Michele Thiella


On 5/28/21 10:07 AM, Nil Geisweiller wrote:
> It's BTW totally publishable material, the only reason I didn't publish
> is because I consider the alphabetic problem to be too trivial, and then
> I had to move on to other things and didn't have time to try on less
> trivial problems.  If you can achieve that on the blocksworld problem,
> we can write a paper about it.

Some more info on the subject

https://blog.singularitynet.io/introspective-reasoning-within-the-opencog-framework-1bc7e182827
https://blog.opencog.org/2017/10/14/inference-meta-learning-part-i/

Nil

Nil Geisweiller

unread,
May 28, 2021, 3:20:07 AM5/28/21
to ope...@googlegroups.com, Michele Thiella
On 5/28/21 10:11 AM, Nil Geisweiller wrote:
>
>
> On 5/28/21 10:07 AM, Nil Geisweiller wrote:
>> It's BTW totally publishable material, the only reason I didn't
>> publish is because I consider the alphabetic problem to be too
>> trivial, and then I had to move on to other things and didn't have
>> time to try on less trivial problems.  If you can achieve that on the
>> blocksworld problem, we can write a paper about it.
>
> Some more info on the subject
>
> https://blog.singularitynet.io/introspective-reasoning-within-the-opencog-framework-1bc7e182827
>
> https://blog.opencog.org/2017/10/14/inference-meta-learning-part-i/

Here are two publications describing the pattern miner and the
planner/controller that the inference meta learning system uses

https://github.com/ngeiswei/papers/blob/master/MineSurprisingPatterns/MineSurprisingPatterns.pdf
https://github.com/ngeiswei/papers/blob/master/PartialBetaOperatorInduction/PartialBetaOperatorInduction.pdf

Nil

Michele Thiella

unread,
Jul 20, 2021, 10:13:34 AM7/20/21
to opencog
Hi everyone, sorry if I disappeared, I had to complete the exams.
In the next days, I will retrieve your latest tips and try to use states and transitions between states.

Michele Thiella

unread,
Jul 20, 2021, 10:46:35 AM7/20/21
to opencog
Since some details of my thesis have changed, maybe I will first try to formalize the problem and a line of thought to solve it, then write some comments and text to share with you, in order to understand whether it is feasible to "learn control rules to speed up problem solving" within my deadlines (thank you in advance, Nil, for the publication proposal).

Michele

Nil Geisweiller

unread,
Jul 27, 2021, 4:36:40 AM7/27/21
to ope...@googlegroups.com, Michele Thiella
Yeah, you should probably come up yourself with control rules that would
speed up problem solving for your test case, then evaluate if the
current system (used to speed the alphabetic problem) would be able to
learn it (there are some technical limitations that might make it hard
to learn, the main one being that the pattern miner as currently written
can only learn syntactic abstractions, this can be worked around in
various ways but it requires care).

So once you have your problem and your optimal (or at least better than
average) control rule set, I can help you to evaluate if the current
system can handle it.

Nil

Michele Thiella

unread,
Aug 14, 2021, 10:36:17 AM8/14/21
to opencog
Hello everyone,
I will try to explain in a simple way:

1) my problem and my goal
2) the possible solutions
3) errors/shortcomings found and extra questions encountered along the way


1) The Problem:

Let's start from scratch. My problem is based on the classic problem called "blocksworld problem". That is:
 
- there is a robot manipulator that has 4 actions available:
pickup, putdown, stack, unstack.

- there are blocks on a table

- there is a goal to be achieved

The Goal: 
I am trying to solve any possible arrangement of the blocks. So my work aims to take as input a final arrangement of the blocks and,
through backward inference, obtain the derivation tree that reaches that arrangement via the 4 actions mentioned above.
(I'll explain better later)


The construction of the problem:

- each block can be "clear", i.e. the robot can take it
(it is not clear to me if the vice versa "not-clear" is also necessary)

- the robot hand may be "busy": so it is holding a block. Or "free": it has nothing in its hand

- the 4 actions:

1) pickup:
     - preconditions: "clear" block, "on-table" block and "free" (robot) hand
     - effects: "not-clear" block, "in-hand" block and "busy" hand

2) putdown:
     - preconditions: "not-clear" block, "in-hand" block and "busy" hand
     - effects: "clear" block, "on-table" block and "free" hand

3) stack:
     - preconditions: block1 "in-hand", block2 "clear" and "busy" hand
     - effects: block2 "not-clear", block1 "on" block2, block1 "clear" and hand "free"

4) unstack:
     - preconditions: block2 "not-clear", block1 "on" block2, block1 "clear" and hand "free"
     - effects: block1 "in-hand", block2 "clear" and "busy" hand

Basically the 4 actions mirror physics.
Eg. If I want to take a block from the table, the block must be free ("clear") and my hand must be free.
If block A is "on" block B then I can "unstack" block A and then make block B "clear" and having block A in hand.

Obviously the pickup action is the opposite of putdown; they are used to take/place a block from/on the table.
The stack action is the opposite of unstack; they are used to put/take a block on/from another block.

I hope the introduction to the problem is complete enough.
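
As a sanity check on the model, the four actions above can be encoded STRIPS-style in plain Python and solved by blind search (illustration of the problem only -- not Atomese, not the URE; predicate names follow the post, with the putdown precondition simplified to "in-hand", which already implies a busy hand):

```python
from collections import deque

BLOCKS = ["A", "B"]

def moves(state):
    """Yield (action, successor) pairs legal in `state` (frozenset of facts)."""
    for b in BLOCKS:
        # pickup: clear b, on-table b, free hand
        if {("clear", b), ("on-table", b), ("free-hand",)} <= state:
            yield ("pickup", b), (state
                  - {("clear", b), ("on-table", b), ("free-hand",)}
                  | {("in-hand", b), ("busy-hand",)})
        # putdown: in-hand b
        if ("in-hand", b) in state:
            yield ("putdown", b), (state
                  - {("in-hand", b), ("busy-hand",)}
                  | {("clear", b), ("on-table", b), ("free-hand",)})
        for c in BLOCKS:
            if b == c:
                continue
            # stack b on c: in-hand b, clear c
            if {("in-hand", b), ("clear", c)} <= state:
                yield ("stack", b, c), (state
                      - {("in-hand", b), ("clear", c), ("busy-hand",)}
                      | {("on", b, c), ("clear", b), ("free-hand",)})
            # unstack b from c: on(b,c), clear b, free hand
            if {("on", b, c), ("clear", b), ("free-hand",)} <= state:
                yield ("unstack", b, c), (state
                      - {("on", b, c), ("clear", b), ("free-hand",)}
                      | {("in-hand", b), ("clear", c), ("busy-hand",)})

def plan(s0, goal):
    """Breadth-first search: shortest action sequence whose state contains goal."""
    frontier, seen = deque([(s0, [])]), {s0}
    while frontier:
        state, path = frontier.popleft()
        if goal <= state:
            return path
        for action, nxt in moves(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [action]))
    return None

s0 = frozenset({("on-table", "A"), ("on-table", "B"),
                ("clear", "A"), ("clear", "B"), ("free-hand",)})
print(plan(s0, frozenset({("on", "A", "B")})))
# → [('pickup', 'A'), ('stack', 'A', 'B')]
```

The backward-inference formulation below is the same problem run in the other direction: instead of searching forward from s0, the BC starts from the goal and chains the action rules in reverse.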

2) Implementation: (note that I'm looking for a pure-Atomese implementation)

- Initial Set in the atomspace: 
An external algorithm detects all the blocks present on the table 
(for now the initial arrangement of the blocks does not have any blocks on top of another, as the detection of the blocks is done through Apriltag 
and therefore I would not be able to find the blocks placed under others.
If I have time I will solve this problem using PointCloud.
This is to say that my initial block arrangement can be any.
Eg. 4 blocks:
 - A on B on C, D on table
 - A on D, B on C
 - A, B, C, D on table
 - and so on ...
)

So my atomspace will be about:

(SetLink
    ; block1
    (InheritanceLink (stv 1 1)
        (ConceptNode "block1")
        (ConceptNode "object"))

    (EvaluationLink (stv 1 1)
        (PredicateNode "clear")
        (ConceptNode "block1"))

    ; block2 
    ; ....
    
    ; differentiate the various blocks
    (NotLink (EqualLink (ConceptNode "block1") (ConceptNode "block2")))
)

- Goal implementation:
it depends entirely on how the model is formulated.
If you go for a state-based resolution (finite-state-machine style), the goal will be formulated as one of the states.
Alternatively: in the end, each block will always be on top of something (the table or another block), so a possible goal formulation would be:

(define (compute)
   (define goal-state
      (AndLink
         (ListLink
            (VariableNode "$A")
            (VariableNode "$B")
         )
         (ListLink
            (VariableNode "$B")
            (VariableNode "$C")
         )
         (NotLink (EqualLink (VariableNode "$A") (VariableNode "$B")))
         (NotLink (EqualLink (VariableNode "$A") (VariableNode "$C")))
         (NotLink (EqualLink (VariableNode "$B") (VariableNode "$C")))
      )
   )
   (define vardecl
      (VariableList
         (TypedVariableLink
            (VariableNode "$A")
            (TypeNode "ConceptNode"))
         (TypedVariableLink
            (VariableNode "$B")
            (TypeNode "ConceptNode"))
         (TypedVariableLink
            (VariableNode "$C")
            (TypeNode "ConceptNode"))
         (TypedVariableLink
            (VariableNode "$D")
            (TypeNode "ConceptNode"))
      )
   )
   (cog-bc rbs goal-state #:vardecl vardecl)
)

- Rules for inference:
The same considerations as for the formulation of the goal apply.
Let's start with the rules corresponding to the 4 robot actions and leave out extra rules.
If we rely on the definition above, then, for example, the stack rule would be something like:

(define stack
   (BindLink
      (VariableList
         (TypedVariableLink (VariableNode "?ob") (TypeNode "ConceptNode"))
         (TypedVariableLink (VariableNode "?underob") (TypeNode "ConceptNode"))
      ) ; parameters
      (PresentLink
         (NotLink
            (EqualLink (VariableNode "?ob") (VariableNode "?underob")))
         (InheritanceLink
            (VariableNode "?ob")
            (ConceptNode "object"))
         (InheritanceLink
            (VariableNode "?underob")
            (ConceptNode "object"))
         (AndLink
            (EvaluationLink
               (PredicateNode "in-hand")
               (VariableNode "?ob"))
            (EvaluationLink
               (PredicateNode "clear")
               (VariableNode "?underob"))
         )
      )
      (ExecutionOutputLink
         (GroundedSchemaNode "scm: stack-action")
         (ListLink
            ; effect:              this represent ?ob "on" ?underob
            (ListLink
               (VariableNode "?ob")
               (VariableNode "?underob")
            )
            ; precondition
            (AndLink
               (EvaluationLink
                  (PredicateNode "in-hand")
                  (VariableNode "?ob"))
               (EvaluationLink
                  (PredicateNode "clear")
                  (VariableNode "?underob"))
            )
         )
      )
   )
)


3) Before talking about the problems that this formulation (and the state-based alternative) has, I would like to talk about backward inference.

The implementation and inner workings of the URE are probably my biggest gap,
and also the reason why I can't find the right way to formulate and solve this problem. Some questions:

3.1) I've always seen backward inference work via BindLink and VariableNode. I have no idea if there is an alternative/better way to do it.

3.2) As Linas mentioned, BindLink requires PresentLink, and this is one of the biggest problems.
During backward inference, the rules that are called are combined into one large BindLink, and the same happens to their PresentLinks.
In the end, you get one large PresentLink made up of all the PresentLinks of the called rules.
This means, for example, that I cannot use atoms like

; atom [0]
(EvaluationLink
   (PredicateNode "clear")
   (VariableNode "?ob"))
; atom [1]
(EvaluationLink
   (PredicateNode "not-clear")
   (VariableNode "?ob"))

because it doesn't make sense that the same block is both "clear" and "not-clear".

----------------------
PS. This leads to another question: is what I am saying correct? I'll explain:
Suppose I have 2 rules. One has atom [0] in its PresentLink and the other has atom [1].
Suppose the rules are called in succession by backward inference.
When is the PresentLink evaluated? From what I've seen:

1) the two rules compose the new BindLink, containing the PresentLinks of both (which I think is the "expanded forward chainer strategy")
2) the BindLink is evaluated, and then the solutions are found or not (which I think is the "selected and-BIT for fulfillment")

So only at the end is the PresentLink evaluated, which implies that atoms [0] and [1] must both be present in the atomspace at the same time.

In other words, the statement "the PresentLink of each rule is evaluated when that rule is called" is incorrect. Right?
----------------------

That said, it wouldn't seem like a problem. But it is,
because once a rule writes a new atom into the atomspace,
that atom will always be present, and therefore any rule that uses that atom as a precondition can fire whenever it wants.
For example:

- blocks A, B, C
- initial arrangement: A "on" B, C on the table
- goal: Variable ?ob "on" Variable ?underob

In this setting, using certain atoms to follow the physics of the actions no longer works
(e.g. hand-"busy" and hand-"free": I can only take an object if my hand is free).
After one "pickup" and one "putdown", both atoms are in the atomspace and satisfy the combined PresentLink,
so I can do two "pickups" in a row without ever having to put the first object down.
The result makes no sense.
Essentially, using the presence of certain atoms to limit the solutions to physically correct action sequences does not work (or at least I have not been able to find a logic that fits).
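The failure mode can be shown concretely: if preconditions are checked against an ever-growing set of facts (as with the combined PresentLink) instead of against the current state, an illegal second pickup goes through. A plain-Python sketch with hypothetical names, not OpenCog code:

```python
# Monotonic "blackboard": facts are only ever added, never removed.
facts = set()

def assert_facts(*fs):
    facts.update(fs)

def can_pickup(block):
    # Precondition check against the accumulated facts, mimicking a
    # combined PresentLink evaluated over the whole atomspace.
    return ("clear", block) in facts and ("free",) in facts

assert_facts(("clear", "A"), ("clear", "B"), ("free",))

# pickup A: the effects are *added*, but the old facts are not retracted
assert can_pickup("A")
assert_facts(("in-hand", "A"), ("busy",))

# Physically the hand is now busy, yet ("free",) is still present,
# so a second pickup is wrongly allowed:
assert can_pickup("B")   # passes, although it should not
```

The fix would require retracting ("free",) when asserting ("busy",), which is exactly the state management a monotonic atom store does not do by itself.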


3.3) Mirror problem with unstack rule:

First let's take a step back: 

- blocks A, B, C
- initial arrangement: A, B, C on the table
- goal:
            (AndLink
               (ListLink
                  (VariableNode "?ob")
                  (VariableNode "?underob")
               )
               (NotLink (EqualLink (VariableNode "?ob") (VariableNode "?underob")))
            )


Backward inference could call the following rules in order (conjunction joins two Links into an AndLink):

(goal) <- conjunction <- stack <- conjunction <- pickup <- (init-set)



(EvaluationLink (PredicateNode "clear") (VariableNode "?ob"))
------------------------------------------------------------------------ pickup-action
(EvaluationLink (PredicateNode "in-hand") (VariableNode "?ob"))     (EvaluationLink (PredicateNode "clear") (VariableNode "?underob"))
======================================================================================================================== conjunction
(AndLink
   (EvaluationLink (PredicateNode "in-hand") (VariableNode "?ob"))
   (EvaluationLink (PredicateNode "clear") (VariableNode "?underob")))
------------------------------------------------------------------------ stack-action
(ListLink (VariableNode "?ob") (VariableNode "?underob"))     (NotLink (EqualLink (VariableNode "?ob") (VariableNode "?underob")))
======================================================================================================================== conjunction
(AndLink
   (ListLink (VariableNode "?ob") (VariableNode "?underob"))
   (NotLink (EqualLink (VariableNode "?ob") (VariableNode "?underob"))))


and returns as solutions all the pairwise combinations of the 3 blocks, one on top of another.
This is great, but analyzing the rules, "unstack" would then be of the form:


(ListLink (VariableNode "?ob") (VariableNode "?underob"))
------------------------------------------------------------------------ unstack-action
(AndLink
   (EvaluationLink (PredicateNode "in-hand") (VariableNode "?ob"))
   (EvaluationLink (PredicateNode "clear") (VariableNode "?underob")))


and now the trouble begins: just as the stack rule needed a conjunction rule, the unstack rule needs a disjunction rule,

(AndLink
   (EvaluationLink (PredicateNode "in-hand") (VariableNode "?ob"))
   (EvaluationLink (PredicateNode "clear") (VariableNode "?underob")))
======================================================================================================================== disjunction
(EvaluationLink (PredicateNode "in-hand") (VariableNode "?ob"))     (EvaluationLink (PredicateNode "clear") (VariableNode "?underob"))


From what I know this is not possible, because a rule always has a single atom as its effect and a single atom as its precondition.
But there should be something like the composition rule:

Γ′ ⊢ ψ        Γ, ψ, Γ″ ⊢ ∇
---------------------------
       Γ, Γ′, Γ″ ⊢ ∇




3.4) Finally, the last and, I think, most important question: let's try to work by states.
Well, I have tried many ways and have not succeeded with any of them.
Basically I found shortcomings rather than logical errors.

As has been said, the number of states for this problem is too large to keep them all in the atomspace (especially with many blocks), and wasteful, because, depending on the goal, three quarters of the states would be useless.

So there are 2 ideas (always in Atomese-pure):

1) Find a rule that takes in (precondition) a state and an action and returns (effect) a new state.

2) Find 4 rules (one for each action) that take in (precondition) a state and return (effect) a new one.

So, first of all:

- I could not give the last state created as a precondition.
The preconditions and effects of the rules are non-generic atoms. The only possibility I had thought of was to have the input state as a VariableNode, so that at fulfillment time it would try all the atoms representing my states.
But this is no good, because after n actions, instead of taking the n-th state and creating the (n+1)-th state, it could take the i-th state and create the (n+1)-th state. And of course that is wrong, because the i-th state is old and the arrangement of the blocks has certainly changed since then. (I hope that's clear enough.)

This led me to think that StateLink was a good atom for this purpose.
- A StateLink is unique, so it works as a precondition of my rule because it will always represent the current situation of my blocks.
Yet when I get a sequence of states as the solution to my inference, the PresentLink of my final BindLink requires all of those states to be present in the atomspace at once. And this does not work (again confirming my initial assumption that the presence of the atoms contained in the PresentLink is verified at fulfillment time, not when each rule is called), because, by the very definition of StateLink, all the StateLinks prior to the last one no longer exist.

- I tried associating a FloatValue with the StateLink to represent the state of each block: for example, for each block, one bit for "clear"/"not-clear", one bit for "in-hand"/"not-in-hand", etc.
The idea was to flip the status bits of an object as a rule was called on that object.
I guess that's no good because:
   - either the bits of the Value are the precondition and effect of the rule, or the inference does not perceive their change during the calls of the various rules (if, for example, the bit flips happen inside the GroundedSchemaNode);
   - even if the bits of the Value were the precondition and effect of the rule, there would still be the PresentLink problem. Once I have created the "can-pickup" state of block A, it will always be usable, because it is in the atomspace, even when A is no longer pickable.





4) Conclusions:
I think something is missing from the current system to solve this problem (or I need some advice, because I can't manage it any other way).

- The idea is a StateLink that does not delete its old state but keeps it in the atomspace, while still being callable generically as a precondition of the rules, with this generic call always referring to the last StateLink created. (I saw there was an obsolete atom, LatestLink, which may have covered part of this behavior.)
  
So the operation would be (call this new atom LatestStateLink):

(define choose-action
   (BindLink
      (VariableList
         (TypedVariableLink (VariableNode "?ob") (TypeNode "ConceptNode"))
      ) 
      (PresentLink
         (InheritanceLink
            (VariableNode "?ob")
            (ConceptNode "object"))
         
         (LatestStateLink "actual_state"
               (ListLink (ConceptNode "?ob") (PredicateNode "state"))
               (FloatValue 0 1 0 .....)
            )
      
      )
      (ExecutionOutputLink
         (GroundedSchemaNode "scm: action")
         (ListLink
            ; effect:             
            (LatestStateLink "actual_state"
               (ListLink (ConceptNode "?ob") (PredicateNode "state"))
               (FloatValue 1 1 1 .....)
            )
            ; precondition
            (LatestStateLink "actual_state"
               (ListLink (ConceptNode "?ob") (PredicateNode "state"))
               (FloatValue 0 1 0 .....)
            )
         )
      )
   )
)

This is very similar to StateLink, except for the name given to the LatestStateLink. The idea is that the precondition of this rule checks only the last state relative to the ?ob block, and not the previous ones. If the last state, which I named "actual_state", has the FloatValue corresponding to the required one, then the rule can be called; otherwise not.

When the rule is called, the effect is written to the atomspace: a new LatestStateLink "actual_state" is added, and the previous LatestStateLink is left in the atomspace but loses the name (so that there is one and only one "actual_state").

By doing this, it is possible to write generic rules that respect the physics of the actions and work with states.
This is just a draft and probably has other errors, but it was one of the ideas that came to me.

Unfortunately I haven't even looked at the C++ implementation of the Atoms and their types. So for "code additions" of this kind I don't think I have the time to get by, understand how the C++ side works, and write the code correctly and completely.


This is all I have managed to write. I'm sorry it's so long and I apologize for the many unclear parts and logical and grammatical errors.
For those who like it, happy reading!

Michele

Michele Thiella

unread,
Aug 16, 2021, 7:14:38 AM8/16/21
to opencog
Before analyzing and answering what I have written, I am trying a different approach, based on the FSM examples.
There are probably big conceptual errors in my previous post,
so before we get lost talking about that, I'll try this new approach first.

Michele

Michele Thiella

unread,
Aug 21, 2021, 8:14:45 AM8/21/21
to opencog
Ok this is a first draft, the file is Blocksworld_FSM.scm (in the branch "restart_master") and the README explains almost everything (skip implementation 1 as I still have to upload the file and look at implementation 2). https://github.com/raschild6/blocksworld_problem/tree/restart_master

Nil Geisweiller

unread,
Aug 24, 2021, 8:40:04 AM8/24/21
to ope...@googlegroups.com, Michele Thiella
Hi Michele,

I took a brief look at your work, and I don't think that's how it should
be handled. Please consider that the AtomSpace is best used as an
immutable data store, especially when it comes to reasoning. Thus you
should think of your problem as an immutable graph that you need to
travel/unfold according to URE rules.

With that in mind, the URE query handed to the backward chainer then
would be something like

initial-state x action-sequence -> final-state

where the action-sequence is the variable that the backward chainer must
fill in. You may choose the format of such relationships and states as
you like.

For instance you could

1. use time (wrap evaluations with AtTimeLink and do temporal reasoning)
2. or describe a state explicitly (say as a list of attribute states of
holding, clear, etc), and have each evaluation take that state in
addition to its arguments (to be, again, immutable).

I feel it's probably best if we schedule a call. I'm available this and
likely next week (I'll probably be on vacation sometime in Sept but I
don't know when).

Nil


Linas Vepstas

unread,
Aug 26, 2021, 12:40:37 PM8/26/21
to opencog
I've been travelling, and will try to read and write a response "real soon now" (next few days).

--linas


Michele Thiella

unread,
Aug 27, 2021, 5:57:27 AM8/27/21
to opencog
Hello everyone and thanks for your time,
unfortunately I am almost out of time due to graduation deadlines...

As Nil reminded me, the atomspace is used as immutable data storage. I always knew this, but I realized late what it means at a practical level...
I tried to use it dynamically, adding and removing atoms during the reasoning. I think this is the biggest mistake that kept me from reaching a solution with backward chaining.

Responding to Nil's proposals:

1. I think temporal reasoning is the most correct way... and it would come close to the ROCCA model (the one you presented in a meeting some time ago), if I'm not mistaken.
2. I tried some solutions describing each state explicitly, but (I don't remember well now) the problem was the lack of generality of the rules (probably my lack of knowledge didn't allow me to generalize them properly)...
As I write this, new solutions are coming to mind, and I would really like to have the time to try them all.
Anyway, I don't know why, but I never considered inferring over the actions rather than over the cubes... even though it's logically obvious!

In conclusion, a few days ago I talked to Adrian Borucki (I hope I'm not wrong) and tried a step-by-step approach: I run one rule at a time and the effects are applied in the atomspace.
OK... it works, of course.
I wrote a breadth-first tree-expansion algorithm, which starts from the initial arrangement and tries all available actions; for each result it creates a new node of the tree (and applies the effect of that action in its atomspace) and repeats until reaching the goal.

I don't think it's the correct way to use the atomspace, but with the rules I had written, my prior knowledge, and the lack of time, I couldn't do better.

Unfortunately, the first part of the project relies on the university's private ROS code, which I cannot disclose.
Within the next 2 weeks (or a little more) I should be able to replace it with stub code and therefore make everything open source.

For now I think I'll keep it this way... but after graduation, maybe I will implement one of the correct approaches!
It has become a personal challenge!
 
Thanks again for your availability!

Michele

Linas Vepstas

unread,
Aug 27, 2021, 11:51:46 AM8/27/21
to opencog
Hi Michele!

A quick reply to your last email.

-- What Adrian said: yes, one should always begin by running rules one at a time, by hand, to see what happens, after each one is run.

-- "immutable" is not quite the right word. The atomspace is a "blackboard": you can write on it, and you can erase (portions of) it, but you cannot change what is written on it, without erasing first.

-- Some AI textbooks use the word "blackboard".  This is the same thing.

-- Given what you wrote below, it sounds like you had state management problems.  For example, while stacking blocks, perhaps there were some other left-over stacks from earlier attempts, that ruined the logic? Perhaps you stored info that said the robot arm is both empty, and holding something? Of course, it can't be both, but the atomspace doesn't know, and you have to manage that state, explicitly.

-- There are three ways to do this: with StateLink, by pushing/popping atomspaces, and "ad hoc".

-- The StateLink is a tool for atomic erase-and-write.  For example

(State (Concept "robot arm") (Concept "empty"))

can denote an arm that is not holding anything.   Later on, if you create the atom

(State (Concept "robot arm") (Evaluation (Predicate "holding") (Concept "block 42")))

then the first StateLink will be automatically removed. (and it will be updated atomically: any thread will see either the old link, or the new link, and will never see both, and will never see neither.)

I did all of the Hanson Robotics code using StateLink.  It was used to represent anything that modelled the external world, and was changing in time -- who the robot was looking at, what the robot was doing (smiling, frowning.,..), affective state (content, anxious, ...), the current topic of conversation, the previous sentence...  You can use it to hold not only the state of the arm, but also the arrangement of the blocks on the table.

-- AtomSpace push and pop. The idea here is to create temporary "scratch" atomspaces, write stuff into them, and then throw them away by popping them off the stack. Ideal for recursive algorithms.  For example:

; Initial atomspace contains four blocks on table and empty robot arm.
(cog-push-atomspace)
; Run rule to pick up block. Atomspace now has three blocks on table, and arm holding block A.
(cog-push-atomspace)
; Run rule to place block A on top of block B. robot arm is now empty again.
; print "success!" to output
(cog-pop-atomspace)
; Popping the atomspace is like going backwards in time...
(cog-pop-atomspace)
; After the second pop, we are now back to the initial state, of unstacked blocks and empty arm.

The above works fine, in general. However:
1) StateLink may be buggy with push-pop.  No one has ever used them together!!
2) The URE (and PLN) does NOT use push/pop during rule-application/reasoning (?? Not sure.. right, Nil??) Thus, if you apply some rule, it changes the contents of the atomspace, and there is no easy way to go "backwards in time" and pretend the rule was never applied. This is particularly important during reasoning, when rules may have side-effects that affect the state !!! Without push-pop, there is no way to undo the state change!

I guess that's all.  A few words about the general idea.

** Each atomspace, after a push or pop, is formally called a "Kripke frame" -- it is a "what if" model of the "current universe".  The concept of Kripke frames allows "modal logic" to be used in understanding things. For example, "what if I stack block B on top of block A?"  -- given that hypothetical universe, you can then explore further: "suppose I put block C on block B, then what?"

** Besides push and pop, you can navigate to different atomspaces, and so, in general, there will be a lattice of possible worlds, and you can take a birds-eye view of all of those worlds (and perform reasoning on them).

That's the general idea. There may be bugs and usability issues.
a) StateLink might work badly with the push-pop
b) Truth values may work badly with push-pop. I think we fixed this once, but there might be bugs.
c) URE and PLN mostly don't take advantage of push-pop, and thus, if you have a rule that has side-effects, they are not isolated. That is, the URE is not "hygienic". (For the schemers reading this: the URE is like a macro system...)

I'll take a look at a) and b) shortly.

I am sorry you are running out of time. I'm not sure how to best spend the time remaining. Probably the best thing to do is what Adrian suggested: make sure that you can apply rules, one at a time, by hand, and that you get the expected results.  At least, that way, you get a collection of rules that "work", and you'd be missing a chainer for them.  Automatically chaining them would then be some other, future step.

--linas




Michele Thiella

unread,
Aug 27, 2021, 12:58:11 PM8/27/21
to opencog
OK, I'll try to add something and explain better.

- I tried to use StateLink but I probably didn't understand how to fit it with backward chaining. In the sense that, from what I understand, backward chaining creates a BindLink (or QueryLink) composed of several rules that are chained together since the precondition of one rule corresponds to the effect of another.
The conditional part will also be formed by the union of the conditional parts of these rules.

So in the conditional part I can't verify the presence or absence of a StateLink, because:

- the conditional part is checked when the BindLink is called, before any rule contained in it is executed;
- the conditional part is made up of all the conditional parts of the rules called within the BindLink.

So I can't use StateLinks as preconditions (or effects) of the rules: if a StateLink were used as an effect, by concatenation it would have to coincide with the precondition of another rule. But then, as a precondition, it would appear in the conditional part inside the PresentLink, which is checked before the rules are executed.
For example, I want to UNSTACK and then PUTDOWN an object.
For PUTDOWN, I require as a precondition a StateLink that allows me to execute a putdown.
For UNSTACK, I require as a precondition a StateLink that allows me to perform an unstack.
(So the putdown precondition is admissible as an unstack effect.)
If these two rules are back-chained together, their PresentLinks require their respective preconditions to be in the atomspace, and both PresentLinks are evaluated before either of the two rules is actually called, returning false.

That said, these are probably not the intended ways of using StateLink.

I'd like to discuss it more later ..



- pushing / popping atomspaces:
that's essentially what I'm doing now.
The model-based rules all work well. So I copy the current atomspace into a "temporary" one; I run, for example, PICKUP and get all the possible cubes to take. For each solution I create an atomspace and apply it, and finally I clean up and delete the "temporary" atomspace, continuing like this to build the tree and explore all the possible sequences of actions.
Furthermore, I compare each newly created atomspace with all those belonging to its branch, and if one coincides (i.e. they have the same arrangement of the blocks) then I stop expanding that node, because it is cyclic and useless. By doing a breadth-first exploration I get the shortest possible path from the initial arrangement to the goal arrangement.
The algorithm is a bit heavy and the tree explodes quickly, but it is conceptually correct and working.
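[Editor's note: the tree search described above can be sketched in plain Python. This is a hedged analogue, not the actual AtomSpace code; the state representation and the `moves`/`plan` names are illustrative inventions.]

```python
# A world state is a frozenset of stacks; each stack is a tuple of blocks,
# bottom block first. Moving the top (clear) block of any stack generates
# the successor states; a visited set prunes cyclic (repeated) arrangements,
# and breadth-first order guarantees the first plan found is the shortest.
from collections import deque

def moves(state):
    """Yield every arrangement reachable by moving one clear block."""
    stacks = list(state)
    for i, src in enumerate(stacks):
        block, rest = src[-1], src[:-1]
        others = [s for j, s in enumerate(stacks) if j != i]
        if rest:
            # unstack + putdown: the block becomes its own stack on the table
            yield frozenset(others + [rest, (block,)])
        for j, dst in enumerate(others):
            # pickup/unstack + stack: the block goes on top of another stack
            moved = others[:j] + others[j + 1:] + [dst + (block,)]
            yield frozenset(moved + ([rest] if rest else []))

def plan(initial, goal):
    """Breadth-first search from the initial to the goal arrangement."""
    start, goal = frozenset(initial), frozenset(goal)
    queue, seen = deque([(start, [])]), {start}
    while queue:
        state, path = queue.popleft()
        if state == goal:
            return path            # list of intermediate arrangements
        for nxt in moves(state):
            if nxt not in seen:    # cycle pruning, as described above
                seen.add(nxt)
                queue.append((nxt, path + [nxt]))
    return None

# Reversing a three-block tower takes three moves.
assert len(plan([("a", "b", "c")], [("c", "b", "a")])) == 3
```

The visited-set comparison plays the same role as comparing each new atomspace against its branch; the breadth-first queue is what makes the first plan found the shortest.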

I don't think I understand the application of modal logic (for lack of knowledge I think)

----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Finally, now I'm trying to create the goal via NLP.
So, I write the goal in English, create the sentence, parse it, and get the atoms with parse-get-r2l-outputs.
But following one of the examples of the link: https://wiki.opencog.org/w/RelEx2Logic_representation

"The book is on the table." does not create the atom:
(EvaluationLink
   (PredicateNode "on@903a1a18-124d-498d-97af-447277a798e5" ) 
   (ListLink 
      (ConceptNode "book@12357525-7ca9-4d5e-85f8-b565228459aa") 
      (ConceptNode "table@be0f51a3-a7a0-400e-80ea-9ca860928af4")))

Or rather, it creates it. But the ListLink contains only the table ... why?

Michele Thiella

unread,
Aug 27, 2021, 1:01:43 PM8/27/21
to opencog
EDIT:

"The book is on the table." does not create the atom:
(EvaluationLink
   (PredicateNode "on@903a1a18-124d-498d-97af-447277a798e5" ) 
   (ListLink 
      (ConceptNode "book@12357525-7ca9-4d5e-85f8-b565228459aa") 
      (ConceptNode "table@be0f51a3-a7a0-400e-80ea-9ca860928af4")))

Or rather, it creates it. But the ListLink contains only the table ... why?

I correct it:
It creates two separate atoms: 
(EvaluationLink
   (PredicateNode "on@903a1a18-124d-498d-97af-447277a798e5" ) 
   (ListLink 
      (ConceptNode "table@be0f51a3-a7a0-400e-80ea-9ca860928af4")))

(EvaluationLink
   (PredicateNode "on@903a1a18-124d-498d-97af-447277a798e5" ) 
   (ListLink 
      (ConceptNode "book@12357525-7ca9-4d5e-85f8-b565228459aa")))

why?

Michele Thiella

unread,
Aug 27, 2021, 1:04:39 PM8/27/21
to opencog
Sorry again, how careless!
I meant that one atom contains table and the other contains book!

Linas Vepstas

unread,
Aug 27, 2021, 5:07:37 PM8/27/21
to opencog
Hi Michele,

On Fri, Aug 27, 2021 at 11:58 AM Michele Thiella <acikoa...@gmail.com> wrote:

- I tried to use StateLink
...
backward chaining
...

Yes, StateLink is useless with backwards-chaining.  It can only work with forward-chaining.
 
- pushing / popping atomspaces:
that's essentially what I'm doing now.
Model-based rules all work well. So I copy the current atomspace into a "temporary" one,

A minor performance note: using push/pop might be slightly faster, by avoiding a copy. (you can also manually push/pop with cog-new-atomspace, cog-set-atomspace! and stuff like that.)
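[Editor's note: a minimal sketch of the manual approach, using only the calls Linas names here; treat the exact signatures as assumptions and check the current AtomSpace docs.]

```scheme
; Hedged sketch: overlay a scratch AtomSpace, do the work, then switch back.
(define base (cog-atomspace))             ; remember the current space
(define scratch (cog-new-atomspace base)) ; child overlay: base is visible here
(cog-set-atomspace! scratch)              ; "push"
; ... apply a rule with side-effects here; changes stay in scratch ...
(cog-set-atomspace! base)                 ; "pop": the scratch changes are dropped
```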
 
The algorithm is a bit heavy and the tree explodes quickly but it is conceptually correct and working.

This is a generic problem with backwards-chaining: the algorithms are heavy, slow, and have combinatoric explosions.  This has been known since the 1980's and has been the subject of extensive academic research, and, no doubt, dozens of PhD theses.

This is why I keep yabbering about answer-set programming (ASP) and the Univ. Potsdam ASP solver. Because ASP uses a SAT solver under the covers, much or most or all of the combinatoric explosion can be avoided.  Or rather, the SAT solvers prune the graph in such a way that the explosion is avoided.

Exactly how to make use of this whiz-bang technology in the AtomSpace remains an open research question.
 
I don't think I understand the application of modal logic (for lack of knowledge I think)

Backwards chaining is a special case of modal logic.

Very roughly, modal logic is about reasoning over beliefs ("If John believes X, then John should also believe Y" ... or rather, "it is possible that John believes X, in which case it is necessarily true that John believes Y") -- it is a form of reasoning over possible universes, where certain facts end up being necessarily true.

The backwards-chaining variant of this is "if block X is on top of block Y, then it is necessarily the case that Y is on top of the table or that Y is on top of Z", and backwards chaining is just "find all possible universes where block X is on top". (Replace "John believes X" with "X is on top"; the "possible universes" are those where the blocks stack correctly.)

There was some GSOC summer-school effort to map the URE to modal logic, but it wasn't accomplished.


But following one of the examples of the link: https://wiki.opencog.org/w/RelEx2Logic_representation

"The book is on the table." does not create the atom:
(EvaluationLink
   (PredicateNode "on@903a1a18-124d-498d-97af-447277a798e5" ) 
   (ListLink 
      (ConceptNode "book@12357525-7ca9-4d5e-85f8-b565228459aa") 
      (ConceptNode "table@be0f51a3-a7a0-400e-80ea-9ca860928af4")))

Or rather, it creates it. But the ListLink contains only the table ... why?

I assume it's just a bug. The R2L code does not have an active maintainer. Open a bug report. I'll look at it. If it's real easy, I'll try to fix it. I hope it will be easy, and not some ugly mess.
 
-- Linas

Michele Thiella

unread,
Aug 27, 2021, 5:52:59 PM8/27/21
to opencog
Hi Linas, 

Yes, StateLink is useless with backwards-chaining.  It can only work with forward-chaining.

Oh yes! I understood this after several attempts. At least that avenue is now closed off!
 

A minor performance note: using push/pop might be slightly faster, by avoiding a copy. (you can also manually push/pop with cog-new-atomspace, cog-set-atomspace! and stuff like that.)

I hadn't seen push and pop, so I was using (cog-new-atomspace). Actually, the algorithm is in Python, so I do AtomSpace() and, for the copy, I extract the atoms from one and re-insert them into the other (probably with decreased performance).
 
This is a generic problem with backwards-chaining: the algorithms are heavy, slow, and have combinatoric explosions.  This has been known since the 1980's and has been the subject of extensive academic research, and, no doubt, dozens of PhD thesis.

I saw that in old tests. Anyway, my algorithm is more or less a guided forward chaining, but going up or down the tree often makes no difference.
 
This is why I keep yabbering about answer-set programming (ASP) and the Univ. Potsdam ASP solver. Because ASP uses a SAT solver under the covers, much or most or all of the combinatoric explosion can be avoided.  Or rather, the SAT solvers prune the graph in such a way that the explosion is avoided.

Exactly how to make use of this whiz-bang technology in the AtomSpace remains an open research question. 
I don't think I understand the application of modal logic (for lack of knowledge I think)

Backwards chaining is a special case of modal logic.

Very roughly, modal logic is about reasoning over beliefs (If John believes X, then John should also believe Y" ... or rather "it is possible that John believes X, in which case, it is necessarily true that John believes Y") -- it is a form of reasoning over possible universes, where certain facts end up being necessarily true.

The backwards-chaining variant of this is "if block X is on top of block Y, then it is necessarily the case that Y is on top of the table or that Y is on top of Z" and backwards chaining is just "find all possible universes where block X is on top". (replace "John believes X" with "X is on top"; the "possible universes" are those where the block stack correctly.)

There was some GSOC summer-school effort to map the URE to modal logic, but it wasn't accomplished.

I've studied the theory of the SAT problem and some pseudocode, but I don't know much about ASP. When I have more time I will try to learn it, because solving the combinatorial explosion seems like a nice achievement.
 
But following one of the examples of the link: https://wiki.opencog.org/w/RelEx2Logic_representation

"The book is on the table." does not create the atom:
(EvaluationLink
   (PredicateNode "on@903a1a18-124d-498d-97af-447277a798e5" ) 
   (ListLink 
      (ConceptNode "book@12357525-7ca9-4d5e-85f8-b565228459aa") 
      (ConceptNode "table@be0f51a3-a7a0-400e-80ea-9ca860928af4")))

Or rather, it creates it. But the ListLink contains only the table ... why?

I assume it's just a bug. The R2L code does not have an active maintainer. Open a bug report. I'll look at it. If it's real easy, I'll try to fix it. I hope it will be easy, and not some ugly mess.


OK, thanks a lot; first I have to understand how to do it!
I've always managed to report things in words, but the time has come!

Michele

Linas Vepstas

unread,
Aug 27, 2021, 6:49:51 PM8/27/21
to opencog
Hi Michele,

On Fri, Aug 27, 2021 at 4:53 PM Michele Thiella <acikoa...@gmail.com> wrote:

A minor performance note: using push/pop might be slightly faster, by avoiding a copy. (you can also manually push/pop with cog-new-atomspace, cog-set-atomspace! and stuff like that.)

I hadn't seen push and pop, so I was using (cog-new-atomspace). Actually, the algorithm is in Python, so I do AtomSpace() and, for the copy, I extract the atoms from one and re-insert them into the other (probably with decreased performance).

For your problem, the performance impact is minor. It only matters when you have millions of Atoms.

The python API should allow layering: so

base_as = AtomSpace()
child_as = AtomSpace(base_as)

will create child_as so that anything in base_as is visible in it, and all deletes/adds/state changes are confined only to the child, without changing the base. I think there is a demo of this, somewhere.
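[Editor's note: the layering semantics described here behave much like Python's standard-library ChainMap. This is an analogy only, not the opencog API.]

```python
# Reads fall through to the base layer; writes are confined to the child.
from collections import ChainMap

base = {"on": ("book", "table")}   # plays the role of base_as
child = ChainMap({}, base)         # plays the role of child_as

child["held"] = "block-a"                # an add, visible only in the child
assert child["on"] == ("book", "table")  # base facts visible through the child
assert "held" not in base                # the base layer is unchanged
```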
 
 
This is a generic problem with backwards-chaining: the algorithms are heavy, slow, and have combinatoric explosions.  This has been known since the 1980's and has been the subject of extensive academic research, and, no doubt, dozens of PhD thesis.

I saw it in old tests.. Anyway, my algorithm is more or less a guided forward chaining, but going up or down the tree often makes no difference. 

Because forward-chaining has exactly the same problem with combinatoric explosion!

 
This is why I keep yabbering about answer-set programming (ASP) and the Univ. Potsdam ASP solver. Because ASP uses a SAT solver under the covers, much or most or all of the combinatoric explosion can be avoided.  Or rather, the SAT solvers prune the graph in such a way that the explosion is avoided.

Exactly how to make use of this whiz-bang technology in the AtomSpace remains an open research question. 

I've studied the theory of the SAT problem and some pseudocode, but I don't know much about ASP. When I have more time I will try to learn it, because solving the combinatorial explosion seems like a nice achievement.

The only "advantage" that ASP offers on top of SAT is that it allows you to use Prolog syntax. That is, it is almost exactly the same as Prolog, but instead of chaining, it uses SAT "under the covers".

It's more powerful than traditional Prolog alone, partly because it does "cut" automatically (thanks to SAT), and performs an exhaustive search only after pruning.  One can (easily) write general constraint-satisfaction problems in ASP, which is hard or impossible in Prolog.
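[Editor's note: to make the "prunes the graph" remark concrete, here is a toy DPLL-style solver -- an illustration of the pruning idea only, nothing like the Potsdam clingo solver. Clauses use DIMACS-style signed integers: 3 means x3, -3 means not-x3.]

```python
def solve(clauses, assignment=None):
    """Return a satisfying {var: bool} assignment, or None if unsatisfiable."""
    assignment = dict(assignment or {})
    changed = True
    while changed:                 # unit propagation: force single-literal clauses
        changed = False
        for clause in clauses:
            if any(assignment.get(abs(l)) == (l > 0) for l in clause):
                continue           # clause already satisfied
            unassigned = [l for l in clause if abs(l) not in assignment]
            if not unassigned:
                return None        # conflict: this whole branch is pruned
            if len(unassigned) == 1:
                assignment[abs(unassigned[0])] = unassigned[0] > 0
                changed = True
    free = {abs(l) for c in clauses for l in c} - set(assignment)
    if not free:
        return assignment
    var = min(free)
    for val in (True, False):      # branch only where propagation cannot decide
        result = solve(clauses, {**assignment, var: val})
        if result is not None:
            return result
    return None

assert solve([[1, 2], [-1, 2]]) == {1: True, 2: True}
assert solve([[1], [-1]]) is None
```

Propagation and conflict detection discard whole subtrees without enumerating them, which is the sense in which SAT avoids the exhaustive combinatoric explosion.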

The problem with URE/PLN is that it is "impossible" to prune, because any probability that is greater than zero must be considered. Thus, the combinatoric explosion is unavoidable.  However...

There is something called "Morse Theory", of which an important application is "Floer Theory".

So: Morse theory, applied to probabilistic logic, would look like this:  pick a value p and consider all logical deductions for which the probability of a clause is greater than p.  So basically, you do probabilistic logic "just like always", but terminate inference whenever the probability is less than p.  As you vary p from 0 to 1, fewer and fewer inferences can be made. Setting p=1 gives you just plain crisp-logic inference (which will still have combinatoric explosions, but fewer than the case of p=0.5, for example.)
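[Editor's note: the thresholding idea above can be sketched directly. A toy, not PLN; the rule format and names are invented. Each rule carries a probability, a chain's strength is the product along the path, and raising the cutoff p prunes ever more of the inference tree.]

```python
def inferences(rules, start, p, strength=1.0):
    """All conclusions reachable from `start` with chained probability >= p.
    Assumes an acyclic rule base (no termination check for cycles)."""
    found = {}
    for premise, conclusion, prob in rules:
        s = strength * prob
        if premise == start and s >= p:
            found[conclusion] = max(found.get(conclusion, 0.0), s)
            for concl, s2 in inferences(rules, conclusion, p, s).items():
                found[concl] = max(found.get(concl, 0.0), s2)
    return found

# Toy rule base: A=>B (0.9), B=>C (0.8), C=>D (0.5).
rules = [("A", "B", 0.9), ("B", "C", 0.8), ("C", "D", 0.5)]
assert set(inferences(rules, "A", 0.3)) == {"B", "C", "D"}
assert set(inferences(rules, "A", 0.5)) == {"B", "C"}  # D pruned: 0.9*0.8*0.5 < 0.5
assert inferences(rules, "A", 0.95) == {}
```

Between the thresholds 0.36, 0.72 and 0.9 the set of conclusions is constant; those three values are the "critical points" at which the count of inferences changes.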

As you vary p from 0.0 to 1.0, there will be special values where the number of inferences changes. These are called "critical points".  These critical points are what you use to define the "Morse homology".  If there are two different inference paths to a given result, then you have a 1-simplex. If there are 3, then you have a 2-simplex, etc.  They are "homotopic" if there is a way of "continuously" deforming one inference path into another.  Continuity is with respect to the "Scott topology". The Scott topology just defines collections of logical statements that can be reached from one another by single, individual steps. The HoTT book explains more.

The Floer homology is what you get, if you consider the manifold of all possible inferences as a function of all possible probability assignments to your inference rules.  This is effectively infinite-dimensional, but you can still write down a local "Lagrangian". The goal is now to apply Morse theory to this manifold.

After this point, my mind goes "bonk" and it is all very cloudy, but it seems "obvious" that almost all inferences are independent of small changes in the probabilities assigned to the rules. That is, inference is locally continuous, at almost all assignments of probability to the rules. There are only a small number of critical points, where the number of inferences can change.

(Again --- to be clear -- by "inference" I really do mean using PLN, or any other probabilistic system, even e.g. NARS or fuzzy logic, etc.)

To articulate the Morse-theory/Floer theory of probabilistic inference would require a lot of very abstract thinking and paper-n-pencil work. It is NOT coding. However, perhaps some nice theorems or some nice invariants pop out, and those could be used to create high-speed practical algorithms.  For example, mapping regions near critical points to a SAT solver, or something like that.  This is all completely new, utterly unexplored territory.

--linas

Linas Vepstas

unread,
Aug 27, 2021, 11:14:14 PM8/27/21
to opencog
On Fri, Aug 27, 2021 at 10:51 AM Linas Vepstas <linasv...@gmail.com> wrote:

1) StateLink may be buggy with push-pop. 


A basic (working) example will appear in

as soon as the pull req is merged.

--linas

Michele Thiella

unread,
Aug 30, 2021, 5:45:15 AM8/30/21
to opencog
Hi Linas,

I built lg-atomese and then rebuilt atomspace and opencog.
Is it normal that (nlp-parse ".....") doesn't work anymore?

guile> (nlp-parse "The book is on the table")
Error: Cannot connect to RelEx server: unbound-variable
(#f Unbound variable: ~S (LemmaLink) #f)
Backtrace:
           7 (apply-smob/1 #<catch-closure 561c08863de0>)
           6 (apply-smob/1 #<catch-closure 561c08863be0>)
In ice-9/boot-9.scm:
   2312:4  5 (save-module-excursion #<procedure 561c08859040 at ice-…>)
In ice-9/eval-string.scm:
     38:6  4 (read-and-eval #<input: string 561c08c728c0> #:lang _)
In opencog/nlp/chatbot/chat-utils.scm:
   197:16  3 (nlp-parse _)
In unknown file:
           2 (scm-error misc-error #f "~A" ("The RelEx server seem…") …)
In ice-9/boot-9.scm:
   751:25  1 (dispatch-exception 0 misc-error (#f "~A" ("The Rel…") …))
In unknown file:
           0 (apply-smob/1 #<catch-closure 561c08863ba0> misc-error # …)

ERROR: In procedure apply-smob/1:
The RelEx server seems to have crashed!
ABORT: misc-error


But no errors or crashes appear in the Relex server shell:

Info: Waiting for socket connection
Loop count=1 Restart count=0
Info: Waiting for socket connection
Info: Enter thread with handler 1
Info: hndlr=1 recv input: "The book is on the table"
Info: hndlr=1 sentence: "The book is on the table"
Link-parsing: 8 milliseconds (avg=8 millisecs, cnt=1)
RelEx processing: 75 milliseconds (avg=75 millisecs, cnt=1)
Info: hndlr=1 sent parse 1 of 1
Info: hndlr=1 Closed input socket

Michele Thiella

unread,
Aug 30, 2021, 6:56:17 AM8/30/21
to opencog
By the way, using lg-atomese as in your example:

(cog-execute!
   (LgParse
      (PhraseNode "The book is on the table.")
      (LgDictNode "en")
      (NumberNode 1)))

what would the Query look like to get the following?

(EvaluationLink
  (PredicateNode "on")
  (ListLink
    (ConceptNode "book")
    (ConceptNode "table")))

Michele Thiella

unread,
Aug 30, 2021, 7:40:43 AM8/30/21
to opencog
Maybe I answered my own question. 
Lg-atomese doesn't even do the "2logic" functionality, so I'll never find a ConceptNode.
And for relex2logic there is no longer backward compatibility, right?
I ended up reverting to a commit where relex2logic still works; for my simple requests it was enough, and it works.

Michele

Linas Vepstas

unread,
Sep 1, 2021, 6:18:33 PM9/1/21
to opencog
Sigh. Indeed, I get the same crash.

I can fix it by saying
(use-modules (opencog nlp oc))

--linas


Linas Vepstas

unread,
Sep 1, 2021, 6:22:11 PM9/1/21
to opencog
You would have to assemble that query yourself, as you already seemed to be doing.

--linas
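[Editor's note: for reference, a hedged sketch of what such a self-assembled query might look like -- untested against real parser output, and the variable typing is an assumption. It matches any two-argument EvaluationLink and re-emits it; stripping the "@uuid" suffixes would be a post-processing step.]

```scheme
(cog-execute!
   (Query
      (VariableList
         (TypedVariable (Variable "$pred") (Type 'PredicateNode))
         (TypedVariable (Variable "$x") (Type 'ConceptNode))
         (TypedVariable (Variable "$y") (Type 'ConceptNode)))
      (Present
         (Evaluation (Variable "$pred") (List (Variable "$x") (Variable "$y"))))
      (Evaluation (Variable "$pred") (List (Variable "$x") (Variable "$y")))))
```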


Linas Vepstas

unread,
Sep 1, 2021, 6:36:37 PM9/1/21
to opencog
And a third reply:

On Mon, Aug 30, 2021 at 6:40 AM Michele Thiella <acikoa...@gmail.com> wrote:
Maybe I answered myself. 
Lg-atomese doesn't even do the "2logic" functionality

That's correct. It is an interface to link-grammar only, and nothing else.
 
Instead, for relex2logic there is no longer backward compatibility, right?

Everything should work as before, if you include (use-modules (opencog nlp oc)), which is the module that wraps up all the other bits and pieces that remain inside of opencog.

-- linas

Nil Geisweiller

unread,
Sep 21, 2021, 6:56:17 AM9/21/21
to ope...@googlegroups.com, Linas Vepstas
On 8/27/21 18:51, Linas Vepstas wrote:
> 2) The URE (and PLN) does NOT use push/pop during
> rule-application/reasoning (?? Not sure.. right, Nil??) Thus, if you
> apply some rule, it changes the contents of the atomspace, and there is
> no easy way to go "backwards in time" and pretend the rule was never
> applied. This is particularly important during reasoning, when rules may
> have side-effects that affect the state !!! Without push-pop, there is
> no way to undo the state change!

That is mostly correct. The backward chainer does use a child atomspace,
though, but then the results are dumped into the parent atomspace. It's
not ideal. That said, reasoning as currently done by URE/PLN is best
conceived as stateless computing, treating the underlying theory as
monotonic. That is, knowledge is only added, never modified. Yes, TVs
are modified, but their confidences only go up, which is consistent
with such monotonicity.

That is why I'm saying: describe your problem in a stateless manner,
then let PLN unfold/discover the graph until it contains your solution.

Nil
> *1) The Problem:*
>
> Let's start from scratch. My problem is based on the classic
> problem called "blocksworld problem". That is:
> - there is a robot manipulator that has 4 actions available:
> pickup, putdown, stack, unstack.
>
> - there are blocks on a table
>
> - there is a goal to be achieved
>
> *The Goal: *
> I am trying to solve any possible arrangement of the blocks.
> So my work aims to take as input a final arrangement of the
> blocks and
> through backward inference, obtain the derivation tree to
> reach that arrangement, through the 4 actions mentioned above.
> (I'll explain better later)