Quickest way to get an OpenCog/AtomSpace/PLN/URE/NLP instance running in 2022


Reach

Aug 23, 2022, 12:29:13 PM
to opencog
Every few months I come back and make another attempt at getting a simple chatbot running, so I can do practical experiments to wrap my head around the AtomSpace and try some inference.

My first attempt was the old opencog chatbot. I ran into a lot of build failures and tried to work them out, but ultimately gave up.

My second attempt, I tried the old chatbot-psi to see if I would have more luck. It failed to build too.

Lastly, I found Mark's noetic/ros-opencog Dockerfile and got excited to try it, but it fails to build too.

So before I keep trying to forge ahead on paths that may be no longer viable, I thought I'd ask here:
What's the quickest way to get an OpenCog/AtomSpace/PLN/URE/NLP instance running in 2022? A prebuilt Docker container from somewhere? A Jupyter notebook?

Jacques Basaldúa

Aug 24, 2022, 6:21:06 AM
to opencog
I am not qualified to answer this, but I will share my experience as a user who just played with the old opencog. Setting it up was very straightforward.

I just cloned it into a directory I'll call <repo> (any name is better than plain "docker"):

git clone https://github.com/opencog/docker.git <repo>

You can find complete instructions in the markdown files; I'll just walk you through the minimum.
Running a server takes just a few commands:

cd <repo>/opencog
./docker-build.sh -a
cd cogserver/
docker build --no-cache -t opencog/cogserver .
docker run -p 17001:17001 -it opencog/cogserver

Wait for the "Listening on port 17001" message.

Connecting to it is just one more (assuming rlwrap is installed):

rlwrap telnet localhost 17001

My 2 cents.
--
You received this message because you are subscribed to the Google Groups "opencog" group.
To unsubscribe from this group and stop receiving emails from it, send an email to opencog+u...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/opencog/5cdab71f-0e88-45b0-97ad-0c0beb0486ffn%40googlegroups.com.

Linas Vepstas

Aug 24, 2022, 12:39:14 PM
to opencog
Hi Reach,

I wish I could provide a simple answer but I cannot. So I will try to keep it short.

-- We made a deep and fundamental mistake in not tagging the docker containers with specific version tags. You can work around this as follows: pick one of the docker containers and look at its revision history. Perhaps it was last changed on 24 August 2018. Then create a new docker image, with the distro appropriate for that date (so, Ubuntu 16.04 or maybe 18.04), and alter all the git clones to fetch the contents of each repo as it was on 24 August 2018. This feels hacky, and does require a fair amount of work, but it should have a high probability of success.
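For what it's worth, the date-pinning step can be scripted. Here's a minimal sketch of the idea; the demo repo under /tmp, the commit dates, and the cutoff are all made up for illustration (against a real repo you'd `git clone https://github.com/opencog/atomspace.git` and run only the `rev-list`/`checkout` part):

```shell
#!/bin/sh
# Sketch: check out a repo as it was on a cutoff date.
# Built against a throwaway local repo so it runs offline.
set -e
CUTOFF="2018-08-24"

# Make a tiny repo with one commit before the cutoff and one after.
rm -rf /tmp/pin-demo && mkdir -p /tmp/pin-demo && cd /tmp/pin-demo
git init -q
GIT_COMMITTER_DATE="2018-01-01T00:00:00" \
  git -c user.name=x -c user.email=x@x commit -q --allow-empty \
      -m "old" --date="2018-01-01T00:00:00"
GIT_COMMITTER_DATE="2022-01-01T00:00:00" \
  git -c user.name=x -c user.email=x@x commit -q --allow-empty \
      -m "new" --date="2022-01-01T00:00:00"

# Find the last commit on or before the cutoff and check it out.
REV=$(git rev-list -n 1 --before="$CUTOFF" HEAD)
git checkout -q "$REV"
git log -1 --format=%s   # prints "old"
```

In the Dockerfile you'd replace each plain `git clone` with a clone followed by a checkout of the `rev-list` result, using the date the container was last known to build.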

-- You are seeing build failures because everyone has lost interest in maintaining the chatbots. This is because they were never awesome to begin with, and because, theoretically speaking, they are more or less a dead-end (towards creating AGI). They did shine a light on some interesting ideas, and if you were a heavy-hitting programmer with good theoretical chops, we could talk about that... but none of it is easy and all of it is time-consuming. If you just want a conventional chatbot as a toy to play with, I assume the mainstream ones work great. So I assume you want something more than a mainstream toy ...

-- There is an effort, on Discord, to revive the old Blender-animated robot head. But that is just the head; no one has attempted to re-attach it to the chatbot. I can send a Discord invite if needed.

Build failures should be reported on github. There are three possible outcomes: (1) you'll be told that component xyz is obsolete and unsupported (almost surely by me), (2) there will be no response at all, just silence (no one else is listening and I'm overwhelmed), or (3) it will get fixed (almost surely by me).

I'm motivated to provide (3) but sometimes dispense (1) and (2). I keep saying "me" because Ben has pulled almost everyone off of the projects here, and onto other projects. There are half a dozen people kind-of-ish involved; they hang out mostly on Discord, and none are hacking on the chatbots.

I would *love* to have someone excited enough about all this to look it over, report bugs, fix bugs, and actively participate.

-- linas



--
Patrick: Are they laughing at us?
Sponge Bob: No, Patrick, they are laughing next to us.
 

Linas Vepstas

Aug 24, 2022, 12:51:13 PM
to opencog
Hi Jacques,

Yes, this works great. However, it's just the bare-bones system; the chatbots are not enabled/configured. The robot head won't start up.

Also, it is not the "old opencog", despite all the hype about Hyperon. Large and important parts of the system are not only still supported, but are in active development. I'm working hard on making certain subsystems much better. Others are involved too. I think that the "important" parts will live on and get ported to / made interoperable with Hyperon if/when that ever matures. So please don't say "old" -- it's the real deal; it's what actually works *right now*.

However, there actually are parts that are "old" and abandoned, with no plans to revive or replace them. Those have been split off into distinct git repos and are mostly clearly marked as old or abandoned.

--linas


Jacques Basaldúa

Aug 24, 2022, 3:09:44 PM
to opencog
>>So please don't say "old" -- it's the real deal, its what actually works *right now*.

Sorry for that. I said I was not qualified and have just been playing with it. 

It was easy to get opencog running. From there, the curious user can enter the Scheme interpreter by typing `scm` and import the module with `(use-modules (opencog))`, as described in: https://wiki.opencog.org/w/Getting_Started_with_Atoms_and_the_Scheme_Shell


Dustin Ezell

Aug 24, 2022, 9:52:33 PM
to ope...@googlegroups.com
Perhaps I should just join Discord. If Ben has moved folks to other tasks, where is the activity these days that was once here? I've noticed this list isn't as busy as it once was.

Reach Me

Aug 27, 2022, 8:46:15 PM
to ope...@googlegroups.com
Thanks, Jacques, for sharing your approach; that got me into an OpenCog instance I can at least play around with.

Linas, I periodically check your NLP diaries in version control to see how much I can understand about your progress.
I'm sure it's a mixed blessing: I see your name attached to so many things, which is amazing, but it probably also keeps you busy, so I can appreciate the value of your time.

Ultimately, I'm very interested in AGI. However, I'm about to have twins, and my background is more in systems administration than mathematics, so I'm trying to piece together what I can from what I read and watch. I feel that, for now and the foreseeable future, I'm going to be like a person watching cooking shows at home and trying recipes to build practical skills, but in the realm of AI/ML. I do think that symbolic or neuro-symbolic approaches are going to be a more useful path than the traditional ML that is packaged in so many classes today. Because the breakthroughs in symbolic AI from the '80s and the neuro-symbolic approaches of today aren't really packaged for step-wise learning in modern formats, my thinking was that a "chatbot" would show me practical hands-on examples. I was hoping to tinker with basic NLP, Scheme, truth tables, natural-language logic word problems, the AtomSpace/knowledge graph to store info, and an extensible example of inference and usage of PLN/URE.

I've been through the hands-on-with-OpenCog wiki, and the examples there are simplistic enough to make work, but harder to abstract to more complicated scenarios.
For example, my hope was to turn a sentence/inquiry like this into a representation in the AtomSpace, and then try to infer the answer:
"Five people were eating apples, A finished before B, but behind C. D finished before E, but behind B. What was the finishing order?"

I think the vision/avatar/cloudcog/conversational dialog initiatives are all wonderful things, but I think they may be far beyond my grasp at this stage.

I'm open to learning-resource suggestions if anyone has some. I'm currently starting to read "Artificial Intelligence: A Modern Approach" by Russell and Norvig and "The Scheme Programming Language" by Dybvig.

Linas Vepstas

Aug 29, 2022, 7:37:18 AM
to opencog
On Sun, Aug 28, 2022 at 3:46 AM Reach Me <reac...@gmail.com> wrote:

> I'm sure it's a mixed blessing: I see your name attached to so many things, which is amazing, but it probably also keeps you busy, so I can appreciate the value of your time.

Thanks!  Yes, it keeps me busy, but I enjoy what I do, even if I sometimes complain about it!
 

> Ultimately, I'm very interested in AGI. However, I'm about to have twins, and my background is more in systems administration than mathematics, so I'm trying to piece together what I can from what I read and watch. I feel that, for now and the foreseeable future, I'm going to be like a person watching cooking shows at home and trying recipes to build practical skills, but in the realm of AI/ML. I do think that symbolic or neuro-symbolic approaches are going to be a more useful path than the traditional ML that is packaged in so many classes today. Because the breakthroughs in symbolic AI from the '80s and the neuro-symbolic approaches of today aren't really packaged for step-wise learning in modern formats, my thinking was that a "chatbot" would show me practical hands-on examples. I was hoping to tinker with basic NLP, Scheme, truth tables, natural-language logic word problems, the AtomSpace/knowledge graph to store info, and an extensible example of inference and usage of PLN/URE.

Cool! Yes, you should definitely explore and play around with different pieces/parts. Get familiar with the terrain, the landscape, what's where.

> I've been through the hands-on-with-OpenCog wiki, and the examples there are simplistic enough to make work, but harder to abstract to more complicated scenarios.
> For example, my hope was to turn a sentence/inquiry like this into a representation in the AtomSpace, and then try to infer the answer:
> "Five people were eating apples, A finished before B, but behind C. D finished before E, but behind B. What was the finishing order?"

You are not the only one with such a hope, and there's a grand lesson that has been learned by those who have attempted this.  The lesson is basically this: yes, you can do this, you can make it work, but the result is always kludgy and fragile. Change the sentence slightly, it doesn't work. Change the question, it doesn't work. There are thousands, tens of thousands, millions of questions: are you going to hand-write code to deal with each variation? What if instead the question is: "An apple grows on a tree, the Sun grows in the sky;  when will you be home for dinner?" Are you expecting a factual answer, or is this poetry meant to make you smile and reflect? Are you ready to write poetry software?

So there are two directions. One is to go ahead and build such a system, anyway. Among other things, it can be commercially valuable. For example, Siri, Alexa are this kind of system: hand-crafted, carefully constructed by an army of developers.  It works, and people love it. I guess it would be pretty cool to build an open-source version of these.  But it won't be me: been there, done that; I've glimpsed the issues and understand that it is not a path to AGI.

The other direction, the one I'm on, is to ask "what does it take to learn everything, from scratch?"  That's certainly the path that the DL/NN people took, and they've obtained truly remarkable results. And are sure to obtain more, although I sense a roadblock, a subtlety on that path.  Myself, I'm pursuing a variant: a statistical approach that explicitly involves symbols. At this time, it's very far from a chatbot.  But it's very promising. I really like how it's going.

> I think the vision/avatar/cloudcog/conversational dialog initiatives are all wonderful things, but I think they may be far beyond my grasp at this stage.

Sure. Well, improve your grasp! Go and build a chatbot; try to slot the pieces together. It's all good; it's not a waste of time.

> I'm open to learning-resource suggestions if anyone has some. I'm currently starting to read "Artificial Intelligence: A Modern Approach" by Russell and Norvig and "The Scheme Programming Language" by Dybvig.

Scheme is weird. It makes the scales fall from your eyes. Try SICP -- "Structure and Interpretation of Computer Programs". Reading it is like, ok, ok, ok, ... wait, what?

Also, read and grok "The Lisp Curse"  http://www.winestockwebdesign.com/Essays/Lisp_Curse.html  ... and then go ahead and use it anyway. ... and collaborate!

--linas
 

Ben Goertzel

Aug 29, 2022, 11:25:20 AM
to ope...@googlegroups.com
> I'm motivated to provide (3) but sometimes dispense 1 & 2. I keep saying "me", because Ben has pulled almost everyone from off of the projects here, and onto other projects.

A number of SingularityNET folks who were previously working on what
I've been calling "OpenCog Classic" (the version of OC Linas is
actively maintaining and developing) are now working on Hyperon, yeah,
see

https://github.com/trueagi-io/

https://wiki.opencog.org/w/Hyperon (not wholly up to date)

or see the talks from AGI-22 Hyperon workshop at

https://www.youtube.com/watch?v=BYvOMXl8zcc

Linas, if you don't like "OpenCog Classic" as a label to use for "the
version Linas is actively maintaining/developing" it would be great if
you'd suggest another name...

Hyperon shares largely the same conceptual foundation and high-level
design as OpenCog and on my end the AI algos I want to run on Hyperon
overlap very closely w/ the ones I wanted to run on OCC ... so I do
think it's valid to call Hyperon "a species of OpenCog" as opposed to
a totally new architecture

In the big picture it's probably good to have multiple related
approaches/versions out there, exploring different regions of design
space... this has worked out OK e.g. in the operating system domain
obviously...

As you'll see if you watch the AGI-22 talks (e.g. the ones by Alexey,
Nil and Jonathan Warrell), the MeTTa (fka "Atomese 2") interpreter (a
key part of Hyperon) now works well enough that hardy souls with a
taste for weird functional languages can play with it and write
interesting code...

However, it is not yet mature enough to be recommended for practical
applications... documentation is incomplete, performance is not yet
optimized, and language features are still being tweaked/added based
on experimental usage

We anticipate that once the framework matures, most development on
Hyperon will be done via AI-algo- or application-specific DSLs written
in MeTTa ... one thing we are now working on is a DSL (written in
MeTTa) for writing MeTTa DSLs. We anticipate MeTTa DSLs then being
invoked within code in other languages (Python, Haskell, Julia) much
as ML/DL frameworks are now invoked w/in scripts... (See Adam
Vandervorst's talk at AGI-22 for a high-level overview of this...)

Chatbot-wise, the trajectory we're on is to

-- open-source the dialogue-system framework we've written for the
Grace eldercare robot (awakening.health), which uses OpenCog Classic
for some things along with a bunch of transformer NN models

-- replace OCC with Hyperon in the above framework ... and also
introduce an experimental usage of Hyperon for episodic memory
associated w/ dialogue

However, the above has not been done yet, meaning there is no
Hyperon-related "chatbot" framework you can use at this moment...

The 3 main use-cases we're initially looking at for Hyperon, to pursue
as guides for development before the system is mature, are:
agent-control in Minecraft (portable in many ways to other virtual
worlds), dialogue-systems as noted above, and genomics (the OCC
BioAtomspace should be portable to Hyperon fairly straightforwardly)

-- Ben

Linas Vepstas

Aug 30, 2022, 8:23:02 AM
to opencog
Hi Ben,

Thanks for writing. I have two concerns. You say "replace OCC with Hyperon". There's an ocean of stable, debugged, performance-tuned, working code. When you call it "OpenCog Classic", it suggests that you will be providing some kind of portability path for existing applications. Yet I suspect that there aren't any compatibility plans.

The other concern is that by calling it "classic", you are sucking all the air out of development. There's only a finite amount of development talent. I would be much happier if you were recruiting developers from the ranks of Neo4j or TinkerPop or grakn.ai or some such projects, instead of cannibalizing opencog. Enlarge the pool; don't turn it into a zero-sum competition for developer mindshare.

-- linas
