Re: [opencog-dev] Contributing to Opencog

236 views

Linas Vepstas

unread,
25.09.2017, 23:46:47
to opencog
Well, we need a crisp-logic reasoner attached to the language subsystem. For example, the Uni-Potsdam ASP solver hooked up to the output of relex2logic.  How easy or hard this might be depends on whether you know ASP and anything about language, and/or have studied logic.
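[For readers who haven't seen ASP: an answer-set program is a set of facts and rules that a solver such as clingo grounds and solves. A toy sketch, with hypothetical predicates that are not actual relex2logic output:]

```
% toy ASP sketch -- hypothetical predicates, for illustration only
parent(tom, bob).
parent(bob, ann).
ancestor(X, Y) :- parent(X, Y).
ancestor(X, Z) :- parent(X, Y), ancestor(Y, Z).
% a solver would derive ancestor(tom, ann) in every answer set
```

The relex2logic hookup would presumably translate parsed sentences into facts and rules of this shape before handing them to the solver.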

There are other projects too, listed in the "Ideas" wiki page.

--linas

On Mon, Sep 25, 2017 at 11:05 PM, Onkar N Mahajan <on2k...@gmail.com> wrote:
I am interested in contributing to OpenCog. What do I need to learn, and how soon can I be an active contributor?

--
You received this message because you are subscribed to the Google Groups "opencog" group.
To unsubscribe from this group and stop receiving emails from it, send an email to opencog+unsubscribe@googlegroups.com.
To post to this group, send email to ope...@googlegroups.com.
Visit this group at https://groups.google.com/group/opencog.
To view this discussion on the web visit https://groups.google.com/d/msgid/opencog/c87d0f11-8750-4f9f-be5f-f28fa640328b%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.



--
"The problem is not that artificial intelligence will get too smart and take over the world," computer scientist Pedro Domingos writes, "the problem is that it's too stupid and already has."

Amirouche Boubekki

unread,
01.10.2017, 10:38:56
to ope...@googlegroups.com
On Mon, Sep 25, 2017 at 5:05 PM Onkar N Mahajan <on2k...@gmail.com> wrote:
I am interested in contributing to OpenCog. What do I need to learn, and how soon can I be an active contributor?

Forget about it. I've been lurking for 2 years, read dozens if not hundreds of wiki pages and papers, and I still can't contribute to opencog.

From experience, if you look for guidance on how to contribute then you are not good enough.

My 2 cents.

Mark Nuzz

unread,
01.10.2017, 11:18:29
to ope...@googlegroups.com

You bring up a good point. It is quite difficult to maintain. Not only is it a very complex system design, but the implementation is also quite complex, containing a lot of legacy code, unfinished features, experiments, and whatever else. I used to thumb my nose at the fact that not enough attention seems to be focused on simplifying the implementation. But now I have a lot of respect for the project's ability to thrive for so long despite the barriers to entry.

Keep in mind though that at many software companies, it can take a number of months of full time effort before a professional engineer becomes productive, even despite efforts to reduce this time!

And this is also a research project, with the core contributors being focused on research. I don't think they would be able to focus on maintainability.

My approach to solving it, if I were to decide now, would be to first get a lot of feedback on what new and existing contributors find difficult about the project's implementation, maintainability, and ramp up time. Would also look at past posts meticulously, to find patterns.

Some recommendations that might be made (subject to approval and further analysis): Would consider gradually moving parts of the C++ into idiomatic C# (fully open source now), or even F# if functional programming environment is desired. Any C++ developer can understand C#, and almost anything you can do in C++ can be done in C#, but with less complex code and easier troubleshooting. Even if half the core developers were to prefer python (for example) I'd try to persuade everyone that C# is a better choice due to the fact that it probably has the lowest learning curve of all major languages, is syntactically and idiomatically similar to C++, and has high compatibility due to the VM runtime.

Would clean up the build scripts or even rewrite them completely (the scripts aren't bad, but they're old and probably need an overhaul). Toss out all support for alternate OS, alternate compiler, etc. At least until support can be re-added under a newer build system. There is (or was) a lot of "junk DNA" code in the build script that only hinders efforts to understand it or improve it. Ever hear of the "broken window problem"? CMake has a high learning curve (and is it really necessary when using one OS?) so I'd probably do away with it.
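[For scale: a from-scratch build script for a single library need not be complicated. A stripped-down CMakeLists.txt sketch, with hypothetical project and target names unrelated to OpenCog's actual build:]

```cmake
# minimal sketch -- hypothetical names, for illustration only
cmake_minimum_required(VERSION 3.10)
project(example CXX)
add_library(example SHARED example.cc)
target_compile_features(example PUBLIC cxx_std_14)
```

The complexity being complained about comes from multi-OS and multi-compiler conditionals layered on top of a core that is otherwise this small.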

My vote is worth about zero though because I don't have the physical stamina or mental willpower to work on this in addition to my day job (it would not be easy work, and likely some of the hardest coding work I've ever done, which is saying something).



Mark Nuzz

unread,
01.10.2017, 11:21:09
to ope...@googlegroups.com

By the way, I lurked for 10 years before making my first real contribution. Don't blame the project for that one!

Linas Vepstas

unread,
01.10.2017, 12:30:54
to opencog, Michael Duncan
Hi Mark,

If we could lower the barriers to entry, we would, but no one knows how to solve that.

I think maintainability is actually OK; we've ripped out almost all legacy code, although numerous ramshackle huts remain, with the vague hope that someday maybe someone will move into them and fix them up.  Maybe we should move those into directories that make it clear that they are abandoned huts.... any suggestions for a directory name?

I mean, how do I tell people "this code compiles but probably doesn't work and no one uses it" vs. "this code is the bedrock foundation that we jealously protect from damage"? Because if you look at the repos, you can't really tell which is which. How can I tell people this without being insulting about it? (I can't say "this code is dogshit" in the README. People would get upset by that.)

There aren't really any problems with C++ or C# ... that's not where the action is. It would be cool if, for example, someone wrote R interfaces for the matrix code, because the bioscience guys like R and I think the matrix code is exactly the right API for them (even though they haven't yet figured this out. I'm talking to you, Mike Duncan...)

CMake is a non-issue. 99% of all developers don't need to fuck-wid the cmake files, or at best have only trivial changes.  They work, don't change anything.

But I agree: jumping into opencog is like jumping into a corporate code base, it can take months to learn enough to be productive.

Maybe Onkar could add R interfaces for the matrix code.   Read the README here:

  https://github.com/opencog/atomspace/tree/master/opencog/matrix

This might be one of the easiest tasks.

--linas





Ivan Vodišek

unread,
01.10.2017, 12:46:44
to ope...@googlegroups.com
> If we could lower the barriers to entry, we would, but no one knows how to solve that.

Hehe, wanna bet? Soon, promise...

Ivan Vodišek

unread,
01.10.2017, 14:32:55
to ope...@googlegroups.com
Just to finish my thought, I don't like to leave things unfinished...

OpenCog is a project of great popularity, as we can see from the many newcomers who try to contribute to it in their free time. However, the project is so complicated that it is not easy to contribute to. From what I see, it is written in C++, Python, and other programming languages, and by now it contains a lot of code that is hard to dive into without help, and that kind of help is hard for regular programmers to even imagine getting without specific tools.

What I like about OpenCog is its open-source orientation, but that is a double-edged sword. Anyone can fork their own version of OpenCog, but the more extensions there are to the original, the harder the project is to maintain. A project of such interest to the global web community deserves greater attention in its attempt to organize contributors. Maybe it would be wise to spend some decent time integrating development tools that allow different contributors to consolidate their mutual work.

OpenCog covers a great deal of knowledge-reasoning aspects, but to really scale it up through the support of many programmers, it needs a carefully thought-out collaboration tool, which is not currently part of the project. I agree that a collaboration tool is not the main concern of OpenCog, but to scale it up, some sort of coordination tool between programmers is necessary; otherwise we get a very complicated system that is hard to improve and maintain.

If you agree that some sort of coordination tool is necessary for such a big project, try to answer: what would such a tool look like? Such a tool would be of interest not only to OpenCog, but to any open-source project. Some might say that this proposition is not the AGI programming we all imagine when thinking of programming artificial intelligence, but do you agree that, if we want to collaborate, we need a proper tool for collaboration? A single human can do something, but not much. A few of them can do more if they are well organized. Now imagine what a much larger workforce could do, if only it could form a productive structure. A productive structure is what makes big projects successful, and I see OpenCog as a big project.

Big projects usually succeed because of strict boss-employee relations, where in the end only one person decides the direction in which the project develops. But I think there could exist a system of security-guided programming in which more human-like collaboration could be achieved. Generally, I imagine a system where any programmer decides what to seed as a part of the project, then sets up accessibility parameters, such as who can modify or fork a specific part of the project, exposed to the public or to specific persons of interest. Overall, any part of the project should have its own discussion thread where contributors would exchange their ideas. And last, but not least, a good unified project-documentation interface should exist, because other people might want to modify or fork the project parts, and that would be impossible without documentation.

The key property of the collaboration system I'm proposing is a security system where each author can grant privileges to modify or fork their work. This way, I hope, the system could scale well toward a larger number of maintainers, without the need to boss people around as in big corporations.

Are there any thoughts on this subject? Is there even an interest in such a tool?


Mark Nuzz

unread,
01.10.2017, 14:33:00
to ope...@googlegroups.com
Hi,


On Sun, Oct 1, 2017 at 9:30 AM, Linas Vepstas <linasv...@gmail.com> wrote:
> Hi Mark,
>
> If we could lower the barriers to entry, we would, but no one knows how to
> solve that.

I doubt that it's as difficult as you think. Even if nobody on the
core team has that expertise, Ben is an excellent recruiter and could
probably find someone with those skills if it was made a priority (and
there are plenty of devs with that kind of background, they don't have
to be world-class). But I'm not sure I ever recall there being much
emphasis on trying to solve it.

>
> I think maintainability is actually OK; we've ripped out almost all legacy
> code, although numerous ramshackle huts remain, with the vague hope that
> someday maybe someone will move into them and fix them up. Maybe we should
> move those into directories that make it clear that they are abandoned
> huts.... any suggestions for a directory name?
>
> I mean, how do I tell people "this code compiles but probably doesn't work
> and no one uses it" vs. "this code is the bedrock foundation that we
> jelously protect from damage" cause if you look at the repos, you can't
> really tell which is which. How can I tell people this, without being
> insulting about it (I can't say "this code is dogshit" in the README. People
> would get upset by that.)

The way I see it handled most often is to only allow
production-quality, working code in the master branch. But didn't the
project try to enforce that with CircleCI (I think) and you had an
issue with that? Not saying CircleCI is a silver bullet that will
automatically solve it, but such a policy, preferably enforced
automatically, would be a key step... If something doesn't meet a
certain threshold of quality, treat it as an extension/plugin so it
doesn't bloat the codebase. Simplicity is the most critical resource
of a project, second only to talent!! Even more important than time
and money. Not advocating dumbing down the system design at all, only
finding simpler means to implement the same thing.

>
> There's not really problems with C++ or C# ... that's not where the action
> is. It would be cool if, for example, someone wrote R interfaces for the
> matrix code, because the bioscience guys like R and I think the matrix code
> is exactly the right API for them (even though they haven't yet figured this
> out. I'm talking to you Mike Duncan...)

It might just be my anti-C++ bias talking. Even though you, Linas,
know that the action is not happening there, a lot of new people
coming in would be less likely to say "fuck this" early on if the
footprint for C++ is smaller. A larger C++ footprint with more
dependencies means more points of failure during the build step,
before the first line of code is even executed. I frequently have
trouble building more complex projects unless the build scripts are
very polished.
>
> CMake is a non-issue. 99% of all developers don't need to fuck-wid the cmake
> files, or at best have only trivial changes. They work, don't change
> anything.

They work for you, but what about Samantha Atkins for example? She had
issues building, and there were no obvious error messages, or hints
describing what to do. But the main point I wanted to make with that
was that it's not a matter of all the devs having to know how CMake
works (that would be a bizarre thing indeed), but rather the build
scripts are the first thing an incoming dev needs to use just to get
started. Think of it as analogous to a gas station sign with the
prices listed. If the sign is broken at all, the business can still
operate, the refineries are still running, and the executives are
still plotting world domination in the board room. The people who go
there often will know that it's the best deal, even if the sign
doesn't work. But there will always be the little guy getting stiffed
30 cents extra by going across the street, not knowing what the
correct prices are. And there will be others who will go across the
street out of principle, because they figure that if a company can't
be bothered to fix their "welcome" sign, they're not worthy of
business. Likewise, if I browse a github repo, and if I notice too
many things early on that don't make sense, then I'll have a negative
first impression bias against that project, even subconsciously.


I realize that some of my ideas are probably not even feasible in the
best case scenario, but the specific ideas are beside the point.
Unless the project officially makes a priority to solve a problem,
nobody with those skills or background will come forward to solve it.
Even if there are people here that know how to solve it, they won't
solve it if they don't think that the project even wants it solved. So
I just checked the roadmap, and literally the first high level goal is
to make the system easy to work with. Is there a wiki page that
describes this in more detail? I think that if something is listed as
a high level goal and is on a project of this size, maybe it would be
worth considering treating it as a subproject in its own right, with a
lead developer.

(Sorry if any of my assumptions are stupidly wrong here).

Mark Nuzz

unread,
01.10.2017, 14:37:43
to ope...@googlegroups.com
On Sun, Oct 1, 2017 at 11:32 AM, Ivan Vodišek <ivan....@gmail.com> wrote:
>
> The key property of collaboration system I'm proposing is a security system
> where each author can give privileges to modify or to fork out their work.
> This way I hope that the system could scale well towards a bigger number of
> maintainers, without need to boss around like in big corporations.
>
> Are there any thoughts on this subject? Is there even an interest in such a
> tool?
>

There's already such a tool: it's called the Pull Request. You could
have, as part of a CI system, a way to automatically assign reviewers
to a pull request based on the changes that are made, and where they
were made. Some bigger tech companies do this very thing.
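[GitHub supports a simple version of this natively via a CODEOWNERS file, which maps path patterns to required reviewers. A sketch with hypothetical paths and handles, not OpenCog's actual layout:]

```
# .github/CODEOWNERS -- hypothetical paths and usernames, for illustration
/opencog/matrix/   @some-maintainer
*.scm              @scheme-reviewer
```

Anyone opening a pull request touching those paths would then have the listed reviewers requested automatically.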

Ivan Vodišek

unread,
01.10.2017, 14:53:20
to ope...@googlegroups.com
And is there a way to extract documentation from the project, the way Javadoc works? That option would be of great help if it were used.


Mark Nuzz

unread,
01.10.2017, 15:02:26
to ope...@googlegroups.com
On Sun, Oct 1, 2017 at 11:53 AM, Ivan Vodišek <ivan....@gmail.com> wrote:
> And is there a way to extract documentation from the project, the way
> Javadoc works? That option would be of great help if it were used.


Try building with "make doxygen"

Ivan Vodišek

unread,
01.10.2017, 16:24:32
to ope...@googlegroups.com
Then you have great powers over there.


Anastasios Tsiolakidis

unread,
01.10.2017, 18:47:34
to opencog
Well, isn't OpenCog having a busy weekend :) As a lurker I have already expressed my dissatisfaction with the "advanced C++" that is the trend in the project, and would probably carry over my disapproval to "idiomatic C#". There is absolutely no reason for the coding to be more difficult to comprehend than OpenCog's design itself. If anything, the code should make plain and simple what the bloody design is trying to do! Now, my particular wet dream would be to see people pulling together their own "free resources", like the free tiers at AWS, Google Cloud etc., to create a hive-mind. If somebody was brilliant enough to throw away big chunks of the code and instead achieve (some of) the same results with a DB of sorts, AWS Lambda etc., that would be quite something. Then, for the parts that don't fit the "cloud" box, if someone could come up with "CloudCog", some probabilistic graph, inference engine or whatever is missing from the garden-variety PaaS and SaaS, then we could really be heading somewhere. I don't know much about the project beyond the demos, but I do believe the project is being hurt by the general unavailability of a constantly running instance that "does something", whatever that may be, and that can somehow be accessed by the public, e.g. through an API. Presumably this new hedge-fund thing may be the closest OpenCog has come to being a 24/7 system, and Ben will probably tell us if he finds a better way to do things with or without this codebase.

AT

Mark Nuzz

unread,
01.10.2017, 19:27:46
to ope...@googlegroups.com

On Oct 1, 2017 3:47 PM, "Anastasios Tsiolakidis" <helle...@gmail.com> wrote:
>
> Well, isn't OpenCog having a busy weekend :) As a lurker I have already expressed my dissatisfaction with the "advanced C++" that is the trend in the project, and would probably carry over my disapproval to "idiomatic C#". There is absolutely no reason for the coding to be more difficult to comprehend than OpenCog's design itself. If anything, the code should make plain and simple what the bloody design is trying to do!

But it's not a matter of what the actual code looks like! The tooling, the compilation times, the libraries, and so on all require a higher level of expertise on the C++ side. Even though Linas knows that understanding it is not needed, the fact is that C++ is by and large a language for experts and, to a lesser extent, academics. So if you want non-academic non-experts (which is the vast majority of the FOSS community), then I recommend polishing the build scripts as needed and fixing all reported issues with the build (if someone has to post here because of a build problem, then it's likely a defect). Then make sure the newbie tutorials exist, are easy to find, are kept updated, and that they steer people far the hell away from the C++ code :)

Linas Vepstas

unread,
02.10.2017, 21:57:02
to opencog
But Ivan, no one forks opencog; almost all extensions are placed back into the core code base.

The python developers like to use PyCharm.  We do not have good tools for visualizing or working with atomspace contents... that's maybe one of the more important parts.

--linas



Linas Vepstas

unread,
02.10.2017, 22:15:30
to opencog
On Mon, Oct 2, 2017 at 2:32 AM, Mark Nuzz <nuz...@gmail.com> wrote:
Hi,


On Sun, Oct 1, 2017 at 9:30 AM, Linas Vepstas <linasv...@gmail.com> wrote:


> I mean, how do I tell people "this code compiles but probably doesn't work
> and no one uses it" vs. "this code is the bedrock foundation that we
> jelously protect from damage" 

The way I see it handled most often is to only allow
production-quality, working code in the master branch.

Yes, but this is the opposite of what we do. I've had this argument with Ben, and he wants anyone to contribute any thing at any time, no matter what the quality is. Everyone gets write permissions always. He doesn't want me or anyone to be a gatekeeper or project manager. He's very anti-project-management.

The current split is that the atomspace is core code, and is more tightly policed, while the opencog repo is the wild-west of random parts.
 
But didn't the
project try to enforce that with CircleCI (I think) and you had an
issue with that?

CircleCI was great, but the integration with github was terrible. It made checking in code very hard.  It was an instrument of torture and pain.

Anyway, that is beside the point: although passing unit tests is important, it is not enough to ensure high-quality code. There's more to it than that.
>

It might just be my anti-C++ bias talking.

I don't particularly love C++. I don't hate it either. It is what it is.

But focusing on this misses the point of opencog.  Don't write C++ code!!  Not you and mostly not anyone, except for a few core maintainers.

You should think of opencog as a domain-specific language. Actually, several domain-specific languages. We've created half a dozen of those; program in those. For example, our newest one is "ghost". Program in ghost; that's what you should use, not C++.

--linas

Linas Vepstas

unread,
02.10.2017, 22:34:07
to opencog
Hi Anastasios,

Yes. But first: complaining that opencog is written in C++ is like complaining about the fact that the Linux kernel on your cellphone is written in C. Who cares? It does not affect 99.9999% of all cellphone users, because they do not write kernel device drivers.

Think of the atomspace as being like an OS kernel.  You probably should not be writing new C++ extensions to it.  Instead, you should be writing apps for it.  The apps are where the action is.

So far, we've offered maybe half-a-dozen app APIs for it, with varying degrees of success.

Having an instance on the cloud would be great, where people could spin up an instance, and log into it. I've long long wanted to do this; hell, I could just throw an old PC onto my internet connection. I don't have time to mess with this.

For cloud-cog, the only thing available would be the app APIs, and maybe that would make the bitching about C++ stop...

--linas

On Mon, Oct 2, 2017 at 6:47 AM, Anastasios Tsiolakidis <helle...@gmail.com> wrote:
Well, isn't OpenCog having a busy weekend :) As a lurker I have already expressed my dissatisfaction with the "advanced C++" that is the trend in the project, and would probably carry over my disapproval to "idiomatic C#". There is absolutely no reason for the coding to be more difficult to comprehend than OpenCog's design itself. If anything, the code should make plain and simple what the bloody design is trying to do! Now, my particular wet dream would be to see people pulling together their own "free resources", like the free tiers at AWS, Google Cloud etc., to create a hive-mind. If somebody was brilliant enough to throw away big chunks of the code and instead achieve (some of) the same results with a DB of sorts, AWS Lambda etc., that would be quite something. Then, for the parts that don't fit the "cloud" box, if someone could come up with "CloudCog", some probabilistic graph, inference engine or whatever is missing from the garden-variety PaaS and SaaS, then we could really be heading somewhere. I don't know much about the project beyond the demos, but I do believe the project is being hurt by the general unavailability of a constantly running instance that "does something", whatever that may be, and that can somehow be accessed by the public, e.g. through an API. Presumably this new hedge-fund thing may be the closest OpenCog has come to being a 24/7 system, and Ben will probably tell us if he finds a better way to do things with or without this codebase.

AT


Ivan Vodišek

unread,
03.10.2017, 00:50:20
to ope...@googlegroups.com
> But Ivan, no one forks opencog; almost all extensions are placed back into the core code base.

I'm aware of that. If someone forked the entire project, it would be called by some other name. I was referring to an imaginary system where the whole project would be a set of modules that work together, connected by a well-known set of interfaces. Each module could be modified or forked in parallel with the original. It would be up to users which sub-forks they choose to run the project with, or to base their contributions on. Probably there would be a need for combination maintainers: people who would choose different flavors of the project and propose their "deejay combo" to the public, optimized for this or that use. Sub-fork combinations of low quality would be avoided, while really useful ones would live on. Just a bit of brainstorming in the direction of decentralization. The goal is to have industry-strength project capabilities with a liberal multi-user maintenance policy. It is on my long-term to-do list, but I could share my thoughts with someone who wants to implement it sooner.

Thank you all for your patience :)


Mark Nuzz

unread,
03.10.2017, 05:23:10
to ope...@googlegroups.com

Ivan,

This is essentially the vision I have for the project too. I wish I could say that it could be done by a determined volunteer, but the logistics of pulling this off are very difficult. It would require multiple experienced and skilled engineers working full-time, possibly paid. That isn't going to happen by itself.

Maybe there is a realistic path to making it happen. Let's talk in more detail later, since I'm interested too, but I can't promise any commitment, as it's tough these days for me to put in the hours in addition to what keeps my bills paid...


Ben Goertzel

unread,
03.10.2017, 05:34:08
to opencog
Yes. My focus at the moment, frankly, is oriented toward raising the
funds required to make this happen...



--
Ben Goertzel, PhD
http://goertzel.org

"I am God! I am nothing, I'm play, I am freedom, I am life. I am the
boundary, I am the peak." -- Alexander Scriabin

Ivan Vodišek

unread,
03.10.2017, 06:34:42
to ope...@googlegroups.com
I'm currently developing a code base for this organizational project in JavaScript. I'm not doing it directly in JavaScript; rather, I'm on the path to building a kind of functional language on which to base the organizational project. If you can get along with these terms, I'm open to collaboration. I have a lot of free time for now. If you think that a functional language is overkill, we can still exchange some ideas, just for fun :)



Alex

unread,
03.10.2017, 14:17:55
to opencog
I have been here since the fall of last year (about a year now), and if I may, I would like to offer the following thoughts, which might make the OpenCog project more attractive in the eyes of developers and users:

1) The first striking feature of OpenCog is its internal complexity. One can read the two-volume AGI book and marvel at ideas about organizing mind agents and processing nodes in multiprocessor, distributed architectures, about load balancing and execution priorities, inter-node communication, etc. All of these are fairly low-level technicalities that require the expertise of systems programmers, and that expertise is quite rare. There are far more business application programmers and scientific application programmers, who rely on OS features and specialty software (like MPICH) to write and run their high-level application code.

I had a discussion in another thread https://groups.google.com/forum/#!topic/opencog/X_eKhNErmC8 about the possibility of using external software and external services more extensively in the OpenCog project. So far the OpenCog project is about a graph database, about graph pattern matching and graph pattern mining, and about a rule engine - but all of these technical services exist as separate projects today. I guess that when the first lines of OpenCog were written, there were no graph databases, and research and tooling for graph matching and mining was only a nascent field. But today the situation is quite different - graph databases and matching/mining projects are available. Maybe the development strategy should be changed - maybe one should use these projects more extensively and, where there is a mismatch of requirements, contribute back to those specialty projects rather than try to outdo them. E.g. I do not believe that it is economically feasible to reimplement a graph database. There are graph database projects, there is TinkerPop (a JDBC-like interface) and there is Gremlin (an SQL-like language). One can implement algorithms in a graph-database-agnostic way and use all the industrial power of the best database available. Scientists use commercial off-the-shelf computers for HPC, so why not use industrial software? Similar things can be said about the use of external reasoners (linear logic, Coq, Isabelle, etc.).
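
The backend-agnostic style described above can be sketched as follows. This is a toy illustration in Python; the GraphAdapter interface and class names are invented for the example and are not part of TinkerPop, OpenCog, or any real project - the point is only that an algorithm written against a small abstract interface can run over any backing graph store.

```python
# Toy sketch of a database-agnostic graph algorithm: the algorithm talks
# only to an abstract adapter, so any backend (a Gremlin-based store, an
# in-memory dict, etc.) could sit behind it. All names here are invented.
from abc import ABC, abstractmethod
from collections import deque

class GraphAdapter(ABC):
    """Hypothetical minimal interface a graph backend must implement."""
    @abstractmethod
    def neighbors(self, node):
        ...

class InMemoryGraph(GraphAdapter):
    """Toy backend; a real database adapter could replace it unchanged."""
    def __init__(self, edges):
        self.adj = {}
        for a, b in edges:
            self.adj.setdefault(a, []).append(b)

    def neighbors(self, node):
        return self.adj.get(node, [])

def reachable(graph: GraphAdapter, start):
    """Backend-agnostic breadth-first reachability."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nbr in graph.neighbors(node):
            if nbr not in seen:
                seen.add(nbr)
                queue.append(nbr)
    return seen

g = InMemoryGraph([("cat", "mammal"), ("mammal", "animal"), ("rock", "mineral")])
print(sorted(reachable(g, "cat")))  # ['animal', 'cat', 'mammal']
```

The algorithm never imports a database driver; only the adapter does, which is the "use the best database available" property the paragraph argues for.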

I guess that the OpenCog graph database, matcher, and miner features are more or less complete, so this work is not required of a novice who would like to contribute to AGI with OpenCog. But the question still stands. If one starts to think about load balancing and scalability, can we safely assume that, from a technical point of view, OpenCog surpasses the industrial graph databases? And what do we do if our Atomspaces keep growing and there is a need to improve this in the project? Should we move to the low-level work of systems programmers, which requires such different expertise? I am just worried about whether the project is going in the right direction. People would like to concentrate on their models and knowledge bases, not on the techniques.

2) The second obstacle to my adoption of OpenCog was missing documentation. E.g. other programming systems have a BNF formalization of their languages and a strict, exhaustive list of constructions and available patterns. OpenCog has a very good list of all the node and link types, but sometimes I would like strict definitions of which nodes can be used with which links. At present I am a bit afraid that I have to do some experimentation. If the language had a more formal specification, it would be possible to develop and formalize the specification further - e.g. go from textual code to the hypergraph, from programs to hypergraph transformations, and see what can be deduced from such semantics.
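
A hypothetical sketch of what such strict node/link definitions could look like in practice: the type names below are real OpenCog atom-type names, but the signature table and the checker are invented for illustration and do not reflect the project's actual specification.

```python
# Illustrative only: a tiny type-signature table of the kind asked for
# above. The atom type names are real OpenCog names, but this permitted-
# argument table is a made-up example, not the actual specification.
SIGNATURES = {
    # link type        -> allowed node/link type for each argument position
    "InheritanceLink": ("ConceptNode", "ConceptNode"),
    "EvaluationLink":  ("PredicateNode", "ListLink"),
}

def check_link(link_type, arg_types):
    """Return True iff arg_types matches the declared signature."""
    sig = SIGNATURES.get(link_type)
    return sig is not None and tuple(arg_types) == sig

print(check_link("InheritanceLink", ["ConceptNode", "ConceptNode"]))   # True
print(check_link("InheritanceLink", ["PredicateNode", "ConceptNode"]))  # False
```

With a table like this published as part of the documentation, the experimentation the paragraph mentions could be replaced by a mechanical check.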

3) The third and last obstacle to my adoption was the remoteness of OpenCog's ideas and concepts. The OpenCog experience has been great because it invites me to look deeper into OO notions. I.e., OpenCog thinks in the more basic terms of extensional and intensional inheritance/association, while OO/UML modelling oversimplifies things. That is good, but still - some canonical mapping from OpenCog notions to more widely adopted knowledge-modelling notions would be helpful, all the more so because I am pretty sure there are people who have made such mappings for themselves. I can say similar things about the probabilistic term logic underlying OpenCog - it is not the most popular thing on the market. Again, I am not against such an approach; I simply invite someone to present a canonical mapping to the more popular logics. I don't remember exactly, but the AGI books had such an explanation, if I am correct.


So the general conclusion is: there are ideas about modularizing OpenCog, but it seems to me that everyone here expects the modules to be developed by the OpenCog community. My view is different. Modularization is required, but we should use already-available software (including external open source) for the graph database, matcher, miner, and rule engine, and grow those projects - and grow ourselves with the growth of those external projects. That is true modularization.

Well, please don't take my thoughts too seriously; I have just expressed my opinions. I am in a rather difficult position: I need to decide which knowledge base to commit to, and I am afraid of making the wrong decision. There are so many factors under consideration, and sure, everyone has his or her own opinions about the ideal project. But things come and go. In my profession I work on a broadcasting system, and I have seen how much the TV advertising business is changing and how its supporting software is changing too. So why should we expect the software for cognitive architectures to remain static?

There has been talk about funding. But funding for what? For developing yet another graph database? Facebook and Google exploited open source software as much as possible during their initial growth phases, and it was the base on which their success stories were built. Of course, after some time they started contributing back to the community. They remain engaged and open with the community, and it is mutual growth for mutual benefit.

Linas Vepstas

unread,
04.10.2017, 03:25:49
to opencog
Ivan, Mark,

The project that Ben is referring to is here: https://github.com/opencog/singnet -- it will allow a number of different AI agents to communicate with one another and exchange information.

Now is a good time to alter the course of events; that project is getting a lot of effort at this particular instant.

--linas



Linas Vepstas

unread,
04.10.2017, 04:02:42
to opencog
hi Alex ... lots of small inline replies below.

On Wed, Oct 4, 2017 at 2:17 AM, Alex <alexand...@gmail.com> wrote:
I have been here since the fall of last year (about a year now), and if I may, I would like to offer the following thoughts, which might make the OpenCog project more attractive in the eyes of developers and users:

1) The first striking feature of OpenCog is its internal complexity. One can read the two-volume AGI book and marvel at ideas about organizing mind agents and processing nodes in multiprocessor, distributed architectures, about load balancing and execution priorities, inter-node communication, etc. All of these are fairly low-level technicalities that require the expertise of systems programmers, and that expertise is quite rare.
 
You can use opencog without knowing anything at all about the above topics.  If they are boring to you, just ignore them.  If they are interesting to you, then perhaps you could be a low-level infrastructure developer for opencog.  We need low-level people, but it's not for everyone.
 


I had a discussion in another thread https://groups.google.com/forum/#!topic/opencog/X_eKhNErmC8 about the possibility of using external software and external services more extensively in the OpenCog project. So far the OpenCog project is about a graph database, about graph pattern matching and graph pattern mining, and about a rule engine - but all of these technical services exist as separate projects today. I guess that when the first lines of OpenCog were written, there were no graph databases, and research and tooling for graph matching and mining was only a nascent field. But today the situation is quite different - graph databases and matching/mining projects are available. Maybe the development strategy should be changed - maybe one should use these projects more extensively and, where there is a mismatch of requirements, contribute back to those specialty projects rather than try to outdo them.

Opencog is far more advanced than any of these other projects.  I wish the people who created these other projects had worked on opencog instead. Oh well.
 
E.g. I do not believe that it is economically feasible to reimplement a graph database. There are graph database projects, there is TinkerPop (a JDBC-like interface) and there is Gremlin (an SQL-like language).

The opencog query language is far more advanced than tinkerpop.  It is unfortunate that the tinkerpop folks decided to invent something new, instead of using what we already had.  Again -- this is about history, politics, and not about technology.
 
One can implement algorithms in a graph-database-agnostic way and use all the industrial power of the best database available. Scientists use commercial off-the-shelf computers for HPC, so why not use industrial software? Similar things can be said about the use of external reasoners (linear logic, Coq, Isabelle, etc.).

If you can attach coq to tinkerpop and make it work ... sure. But you would probably have to completely rewrite both coq and gremlin in order to do this.  And that is a huge amount of work.
 

I guess that the OpenCog graph database, matcher, and miner features are more or less complete, so this work is not required of a novice who would like to contribute to AGI with OpenCog.

That's exactly backwards. These were created to make it easier for the novice to use opencog.
 
But the question still stands. If one starts to think about load balancing and scalability, can we safely assume that, from a technical point of view, OpenCog surpasses the industrial graph databases?

No, because we have exactly zero people working on load balancing and scalability.
 
And what do we do if our Atomspaces keep growing and there is a need to improve this in the project?
 
It's been like that for over 10 years now, yet here we are...

Should we move to the low-level work of systems programmers, which requires such different expertise?

OpenCog has needed systems programmers since the very beginning.  However, systems programmers are very rare, as you point out, and they are fully employed.
 
I am just worried about whether the project is going in the right direction. People would like to concentrate on their models and knowledge bases, not on the techniques.

You can use opencog today, without having to worry about systems programming issues.  Why are you worried about them?


2) The second obstacle to my adoption of OpenCog was missing documentation. E.g. other programming systems have a BNF formalization of their languages and a strict, exhaustive list of constructions and available patterns. OpenCog has a very good list of all the node and link types, but sometimes I would like strict definitions of which nodes can be used with which links. At present I am a bit afraid that I have to do some experimentation.

You should think of opencog atomese as being like python-0.6 or perl-0.8 -- it's not yet at version 1.0, and we are creating, modifying, and changing it all the time.

3) The third and last obstacle to my adoption was the remoteness of OpenCog's ideas and concepts.
 
Ten years ago, if you said "graph database", people would have told you either that you are insane, don't know what you are talking about, or that you are an unappreciated genius who should do something useful with your life.  The concept of a graph database was very remote and strange.  Now, it seems like everyone and their kid brother knows what that is.
 
It was great to have the OpenCog experience because it invited me to look deeper into OO notions.

OO as in "object oriented"?  You should read Lambda the Ultimate to learn what object-oriented programming is. It is safe to say that 98% of all C++ and Java programmers have absolutely no clue at all what "object oriented" means.  They're just ... programmers. They don't need to know, because you can be a good Java or C++ programmer without knowing anything about OO programming.
 
I.e., OpenCog thinks in the more basic terms of extensional and intensional inheritance/association, while OO/UML modelling oversimplifies things. That is good, but still - some canonical mapping from OpenCog notions to more widely adopted knowledge-modelling notions would be helpful.

lambda the ultimate.
 
Even more so because I am pretty sure that there are people who have made such mappings for themselves.
 
lambda the ultimate.
 
And similar things can be said about the probabilistic term logic underlying OpenCog - it is not the most popular thing on the market.

Ten years ago, graph databases were not the most popular thing on the market. 

Ten years ago, one of the programmers who worked on opencog at the time, Joel Pitt, created a startup and tried to sell a graph database, but he was unable to convince anyone that it was useful for any practical, commercial purpose. It's possible that he was a bad salesman. It's possible that the concept of a graph database was just much too early, and the world was not ready for it. It's possible that both were true. At any rate, the startup failed.

We don't have a time machine. We can't fix these issues. We can only go forwards.

--linas
 

Nil Geisweiller

unread,
04.10.2017, 05:45:38
to ope...@googlegroups.com
On 10/04/2017 11:02 AM, Linas Vepstas wrote:
> One can implement algorithms in a graph-database-agnostic way and
> use all the industrial power of the best database available.
> Scientists use commercial off-the-shelf computers for HPC, so why
> not use industrial software? Similar things can be said about the
> use of external reasoners (linear logic, Coq, Isabelle, etc.).
>
>
> If you can attach coq to tinkerpop and make it work ... sure. But you
> would probably have to completely rewrite both coq and gremlin in order
> to do this. And that is a huge amount of work.

I never tried Coq or Isabelle, but the provers I have tried (E and
Vampire) use resolution
(https://en.wikipedia.org/wiki/Resolution_(logic)), which doesn't work
for a paraconsistent logic like PLN, at least not out of the box. On
top of that, PLN is probabilistic (or even meta-probabilistic, we
could say). This makes it difficult, or at best unnatural, to use
traditional automatic theorem provers. Maybe there's an easy way, or a
more general framework that I missed, but that was my impression when
I studied the domain.
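
For concreteness, here is a minimal sketch of the binary resolution rule mentioned above, for plain propositional clauses. This is an illustrative Python toy, not connected to E, Vampire, or PLN; literals are (name, polarity) pairs and a clause is a frozenset of literals.

```python
# Binary propositional resolution: from clauses {P, Q} and {not-P, R},
# derive the resolvent {Q, R} by cancelling the complementary pair.
def resolve(c1, c2):
    """Return the set of all resolvents of two clauses."""
    resolvents = set()
    for (name, pol) in c1:
        if (name, not pol) in c2:
            # drop the complementary literals, union the remainders
            rest = (c1 - {(name, pol)}) | (c2 - {(name, not pol)})
            resolvents.add(frozenset(rest))
    return resolvents

P, Q, R = "P", "Q", "R"
c1 = frozenset({(P, True), (Q, True)})    # P or Q
c2 = frozenset({(P, False), (R, True)})   # not-P or R
print(resolve(c1, c2))  # one resolvent: {Q, R}
```

Note that a clause set like this carries no probabilistic strengths at all, which is roughly the mismatch with PLN described above.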

Nil

Nil Geisweiller

unread,
04.10.2017, 06:33:39
to ope...@googlegroups.com, Linas Vepstas
The AtomSpace project should probably be promoted on its own, with its
own webpage, purpose, reference manual, tutorial, etc.

Also, what is missing for it to become more mainstream is a way to
define atom types within Atomese itself, so that it could be used as a
more neutral graph db. That's really the only missing piece, since TV
and AV have been replaced by generic key x value.
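
The generic key x value idea can be sketched as follows. This is a toy Python model invented for illustration; the Atom class and the value tuples are not the AtomSpace API, they only show the shape of the design: each atom carries an open-ended map of named values, so truth values and attention values become just two entries among many.

```python
# Toy model (not AtomSpace code) of generic key-to-value storage on atoms.
class Atom:
    def __init__(self, atom_type, name):
        self.atom_type = atom_type
        self.name = name
        self.values = {}          # key -> arbitrary value vector

    def set_value(self, key, value):
        self.values[key] = value

    def get_value(self, key):
        return self.values.get(key)

cat = Atom("ConceptNode", "cat")
cat.set_value("TruthValue", (0.9, 0.8))    # (strength, confidence)
cat.set_value("AttentionValue", (100, 5))  # illustrative numbers only
print(cat.get_value("TruthValue"))  # (0.9, 0.8)
```

Because the key space is open-ended, a "neutral graph db" built this way would not need to privilege any particular annotation scheme.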

Nil


Ben Goertzel

unread,
05.10.2017, 05:13:55
to opencog
>> I guess that in the time of making the first OpenCog lines, there were
>> no graph databases, and research and tooling for graph matching and
>> mining was only a nascent field. But today the situation is quite
>> different - graph databases and matching/mining projects are available.
>> Maybe the development strategy should be changed - maybe one should use
>> these projects more extensively and, where there is a mismatch of
>> requirements, contribute back to those specialty projects rather than
>> try to outdo them.

yeah, this makes sense until you look at the details...

One option we have looked at but not fully explored is Apache Ignite,
which is a sort of middleware layer between applications and storage.
It could possibly make sense to use Ignite as the basis for a
distributed OpenCog system -- retaining the current Atomspace as a
"local cache" and using Ignite to handle persistent storage and to
contain policies for interaction between local-cache Atomspaces on
different machines.

However, this is easy to say and the hidden rocks may only become
apparent after actually trying it. If someone feels like prototyping
this, that would be quite interesting.
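
The local-cache arrangement described above can be sketched as a write-through cache. This is plain Python; the CachedStore class is a hypothetical stand-in, and the dict plays the role of Ignite (or any distributed store) - it is a design illustration, not the Ignite API.

```python
# Sketch of the "local Atomspace-like cache over a persistent store"
# pattern. The `backing` dict stands in for a remote distributed store.
class CachedStore:
    def __init__(self, backing):
        self.backing = backing    # pretend-remote persistent store
        self.cache = {}           # local cache, consulted first

    def put(self, key, value):
        # write-through: update the cache and the persistent store together
        self.cache[key] = value
        self.backing[key] = value

    def get(self, key):
        # read from the local cache; fall back to the store on a miss
        if key not in self.cache:
            self.cache[key] = self.backing[key]  # populate on miss
        return self.cache[key]

remote = {"(Concept cat)": 0.9}
store = CachedStore(remote)
print(store.get("(Concept cat)"))   # 0.9, pulled into the local cache
store.put("(Concept dog)", 0.8)
print(remote["(Concept dog)"])      # 0.8, written through to the store
```

The hidden rocks Ben mentions would appear in exactly the parts this toy omits: invalidation between caches on different machines, and the policies governing what each local cache is allowed to hold.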

I can say with confidence there is no other pattern-matching or
pattern-mining framework that does what OpenCog's does, and extending
Neo4J Cypher or whatever is not a viable route to reimplementing what
OpenCog's tools do...

It would require a lot less effort to create nice documentation and
interfaces for OpenCog, than to reimplement OpenCog functions atop
some fashionable platform that is inappropriate for the purpose...

I regret that OpenCog remains so hard to approach. In large part it
has evolved this way because the vast bulk of funding that has gone
into OpenCog has been oriented toward paying a small group of people
to work, in a hurry, on making OpenCog do something specific.... We
have not yet had a big chunk of funding dedicated to making it easy to
use as a platform. Hopefully that will change soon.

-- Ben G

Mark Nuzz

unread,
05.10.2017, 14:57:26
to ope...@googlegroups.com
On Thu, Oct 5, 2017 at 2:13 AM, Ben Goertzel <b...@goertzel.org> wrote:

> I regret that OpenCog remains so hard to approach. In large part it
> has evolved this way because the vast bulk of funding that has gone
> into OpenCog has been oriented toward paying a small group of people
> to work, in a hurry, on making OpenCog do something specific.... We
> have not yet had a big chunk of funding dedicated to making it easy to
> use as a platform. Hopefully that will change soon.

This seems to be a very common theme with projects, especially those
with limited resources. OpenCog is unique, though, in that it has
survived for so long with so many contributors, so the scale and
extent at which this happened is somewhat larger, and it will
therefore require greater effort and coordination to really solve.

I'm curious about a few things...

1) I know you implied this but I wanted to make sure: Do you see the
goal of an easy-to-use opencog architecture as a high priority item?

2) Do you think that the specific architecture direction
(modularization) presented by Ivan is generally the way that this
should be solved?

3) Has there been any concrete work in mapping out a specific
architectural direction to fulfill the goal of making opencog easy to
use?

4) Are these decisions that have already been formally agreed upon by
the governance of the project? Are there any dissenters among the core
developers, to the extent that it might interfere with such plans if
executed?


I am not quite aware of all the details, but I have been trying to
keep up with all of the discussions lately in this group. Please
forgive me if I am being too pedantic... My impression is that funding
would be easier to come by after these items are figured out in great
detail and then incorporated into a proposal. Such a proposal could
attract enough of the right unpaid volunteers too, as you know.


But yeah, I am not claiming by any means to know even remotely as much
as Ben knows on this subject. From my vantage point, though, I am of
the opinion that the monolithic architecture is what's slowing
progress, not the lack of funding. Suppose you get the funds and then
hire the wrong people; then you're even worse off than before, because
you probably wouldn't get another shot at funding for a while. If it
were up to me, I would have at least one existing core developer
involved with this effort full-time, preferably whoever has the most
knowledge of modular software architectures.

Ben Goertzel

unread,
05.10.2017, 15:10:27
to opencog
"Modular" and "monolithic" are very general terms. Could you
articulate more precisely the ways in which you think OpenCog is
"monolithic", and in which you think it could be made more "modular"?

My thinking is: the Atomspace is a distinct module (in its own repo,
it builds separately, etc.), and the various AI processes that can be
used with the Atomspace are also independently buildable and runnable
(MOSES, PLN, the NLP pipeline, ECAN, etc.). Also when we use OpenCog
for robot control, it communicates with other AI tools that are
wrapped up in separate ROS nodes. This already seems pretty modular
to me. So I am wondering what other kind of modularity you are
looking for?

Regarding Ivan's description

***
I was referring to an imaginary system where the whole project would
be a set of modules that work together, connected by a well-known set
of interfaces. Each module could be modified or forked in parallel
with the original. It would be up to the user which sub-forks he or
she chooses to run the project with, or to base his or her
contribution on. There would probably be a need for combination
maintainers - people who would choose different flavors of the
project and propose their "deejay combo" to the public, optimized for
this or that use. Sub-fork combinations of low quality would be
avoided, while really useful ones would live on.
***

I guess one relevant point is that the different AI tools within
OpenCog can interact in many many different ways. E.g. there is no
single, simple interface for interaction between PLN and MOSES; there
are lots of ways they can interface, conceptually speaking. And
figuring out the best ways for them to interface is a current research
topic...

In building a particular OpenCog application, one can define specific
interfaces between the various AI components... But for OpenCog as a
general platform, the interactions between the components have to
remain flexible because there are so many interesting ways to do it...

I think the biggest issue with OpenCog is that we need better
tutorials and documentation. I guess if we had that people would be
able to understand the system better and then would also make more
useful suggestions regarding improving the architecture...

ben





-- Ben
> --
> You received this message because you are subscribed to the Google Groups "opencog" group.
> To unsubscribe from this group and stop receiving emails from it, send an email to opencog+u...@googlegroups.com.
> To post to this group, send email to ope...@googlegroups.com.
> Visit this group at https://groups.google.com/group/opencog.
> To view this discussion on the web visit https://groups.google.com/d/msgid/opencog/CAMyYmr-T4gevcMh_2mYHko-YwuRcCK6dyBfGZVwYT%2BuizjH6PQ%40mail.gmail.com.
> For more options, visit https://groups.google.com/d/optout.



--

Mark Nuzz

05.10.2017, 15:49:43
to ope...@googlegroups.com
On Thu, Oct 5, 2017 at 12:10 PM, Ben Goertzel <b...@goertzel.org> wrote:
> "Modular" and "monolithic" are very general terms. Could you
> articulate more precisely the ways in which you think OpenCog is
> "monolithic", and in which you think it could be made more "modular"?
>
> My thinking is: the Atomspace is a distinct module (in its own repo,
> it builds separately, etc.), and the various AI processes that can be
> used with the Atomspace are also independently buildable and runnable
> (MOSES, PLN, the NLP pipeline, ECAN, etc.). Also when we use OpenCog
> for robot control, it communicates with other AI tools that are
> wrapped up in separate ROS nodes. This already seems pretty modular
> to me. So I am wondering what other kind of modularity you are
> looking for?

That's good news. My view is that this architecture needs to be
developed further in order to reap its full benefits toward the
project's goals.
First and foremost, why are the independent processes part of the main
opencog repo and not managed as separate projects with their own
versioning and dependency toolchain? Do we have a clear understanding
of the pros and cons of doing it this way versus using separate
repositories? My guess is no, not because the team isn't smart, but
because the resources seem to be focused on the specific modules
rather than on the architecture as a whole. But I've learned never to
underestimate the expertise of the team, so this is just a wild guess.

As for the dependency toolchain: I'm only aware of robust tooling for
systems written in a single language or framework; examples are npm,
NuGet, RubyGems, and Paket. This Reddit thread suggests that C++
package management is a hard problem, and I'm inclined to agree:
https://www.reddit.com/r/cpp/comments/3d1vjq/is_there_a_c_package_manager_if_not_how_do_you/
However, it may still be worth trying.

The vision that I have with this -- and this is the key point -- is
that by keeping the modules as separate projects, the perceived
complexity of the system is greatly reduced. The intimidation factor
is reduced (for anyone new). The build tools can be simplified. Each
project can have a concrete release build with a semantic version.
Docker images could be provided, with pre-built binaries, for each
major release. Each separate project could be developed independently
with the assumption that your only required dependencies are those
which are the latest major releases, and therefore if someone has
trouble building, for whatever reason, then they can resort to using a
docker image. Anyone who wants to work on a subproject can hit the
ground running as there will be more concrete releases. Version 1.15
of a given project will always be version 1.15. The way I see it now,
we have a one-dimensional master branch for a large chunk of the
modules that exist in the opencog repo, though I see there are some
efforts to do what I'm suggesting... so we might have the same vision
but with somewhat varying ideas on how to execute it.



>
> Regarding Ivan's description
>
> ***
> I was referring to an imaginary system where the whole project would
> be a set of modules that work together, connected by well known set of
> interfaces. Each module could be modified or forked out in parallel
> with the original. It would be up to a user, which sub-forks she/he
> would choose to use to run the project, or to base her/his
> contribution on. Probably there would be a need for combination
> maintainers, something like persons that would choose different
> flavors of the project, and would propose their "deejay-combo" to the
> public, optimized for this or that use. Sub-fork combinations of low
> quality would be avoided, while really useful ones would live on.
> ***
>
> I guess one relevant point is that the different AI tools within
> OpenCog can interact in many many different ways. E.g. there is no
> single, simple interface for interaction between PLN and MOSES; there
> are lots of ways they can interface, conceptually speaking. And
> figuring out the best ways for them to interface is a current research
> topic...

This is definitely relevant, in the sense that it is easier for a
person to learn or understand a concept when looking at the project
in isolation. That is, if the projects are separated, people will
give a project their full attention while reading about it. If the
projects are merged together, there is a higher likelihood that they
will skim the documentation instead, come away with a poor
understanding of it, and possibly make mistakes later or fail to
return to the documentation a second time due to the perceived time
sink (the truly determined AI developer won't stumble here, but they
are the exception, not the rule). My crazy philosophy is that a new
dev may start off undetermined, and then, if they are not overwhelmed
early on, they will eventually see first-hand how awesome the project
is; everything will click, and, psychologically speaking, the
currents will move with them and not against them while learning the
rest.

>
> In building a particular OpenCog application, one can define specific
> interfaces between the various AI components... But for OpenCog as a
> general platform, the interactions between the components have to
> remain flexible because there are so many interesting ways to do it...
>

Package management and complete separation of modules into separate
repositories seem like the best way to do this. But if you disagree,
I really want to understand why, as it could mean one of us has a
deeply flawed understanding of things, and that could make for an
important revelation...

> I think the biggest issue with OpenCog is that we need better
> tutorials and documentation. I guess if we had that people would be
> able to understand the system better and then would also make more
> useful suggestions regarding improving the architecture...
>

Agreed, though this is also much easier to do under the conditions I
advocate. I'm trying to be careful not to make rash assumptions in
making these suggestions, as I know that my understanding is not
nearly as deep as it should be. The scope of what I am suggesting is
very narrow: it is specific to an aspect of the OpenCog architecture
that I am at least somewhat aware of, and to an aspect of software
architecture and psychology that I believe I understand well.

Ivan Vodišek

05.10.2017, 16:38:15
to ope...@googlegroups.com
Mark exactly hit the point I was trying to make. IMHO, if I were joining an open-source project, I would like to see the ability for programmers to push changes to any sub-area of the project without special permission from a higher force. My thoughts are striving towards decentralization here. But to impose order on such a potentially chaotic conglomerate, "distribution maintainers" would compose and offer what they think is best for a particular use. Programmers would compete on code quality to be included in future parallel distributions, just as is the case with Linux and its distributions. The idea sounds a bit progressive at first look, but it might work. It would take some effort to implement, but maybe some of the source-control solutions Mark proposed would be ready even now, without much effort.

Just sharing some thoughts, someone might find it useful.



Linas Vepstas

05.10.2017, 19:10:20
to opencog
It would be nice to have a fast crisp prover, so that the system could jump to conclusions, with PLN running more slowly in the background.


Linas Vepstas

05.10.2017, 19:22:33
to opencog
People seem not to read the tutorials... maybe because they don't see the point of doing so?




--
Ben Goertzel, PhD
http://goertzel.org

"I am God! I am nothing, I'm play, I am freedom, I am life. I am the
boundary, I am the peak." -- Alexander Scriabin


Mark Nuzz

05.10.2017, 19:36:29
to ope...@googlegroups.com
On Thu, Oct 5, 2017 at 4:22 PM, Linas Vepstas <linasv...@gmail.com> wrote:
> People seem not to read the tutorials... maybe because they don't see the
> point of doing so?
>

Do you think my theory is plausible? Tutorials on a large system must
be greater in scope, and are therefore more likely to be skimmed
(which leads to a failure in comprehension).

OTOH, if modules or projects were usable in isolation, and the
dependencies could be effectively treated as black boxes (as most
software dependencies are), or even simulated/mocked, and if
meaningful experimentation and feedback could still be done within
the narrow scope of that one module, then maybe the tutorials
wouldn't seem so pointless.

Nil Geisweiller

05.10.2017, 23:30:03
to ope...@googlegroups.com, Linas Vepstas


On 10/06/2017 02:10 AM, Linas Vepstas wrote:
> it would be nice to have a fast crisp prover so that the system could
> jump to conclusions, and pln more slowly in the background.

Yes, even for our rule engine alone there is a benefit to that. On top
of being faster to evaluate, crisp rules tend to have fewer premises
than their probabilistic counterparts.

Then the question is how to set the TV of these conclusions. If the
axioms are crisp, with (stv 1 1) or (stv 0 1), then the conclusions
would be (stv 1 1) or (stv 0 1). But if the axioms are non-crisp, then
I guess the crisp rules could set (stv 1 Epsilon) or (stv 0 Epsilon),
just to express that something is possibly true or false. Or else we
could create a new TV type for it.

Nil

>
> On Oct 4, 2017 5:45 PM, "'Nil Geisweiller' via opencog"
> <ope...@googlegroups.com <mailto:ope...@googlegroups.com>> wrote:
>
> On 10/04/2017 11:02 AM, Linas Vepstas wrote:
> And can implement algorithms in the graph database-agnostic way and
>
> use all the industrial power of the best database available.
> Scientists do use commercial off-the-shelf computers for
> HPC, why
> not to use industrial software? And similar things we can
> say about
> use of external reasoners (linear logic, Coq, Isabelle, etc.).
>
>
> If you can attach coq to tinkerpop and make it work ... sure.
> But you would probably have to completely rewrite both coq and
> gremlin in order to do this. And that is a huge amount of work.
>
>
> I never tried Coq or Isabelle, but the provers I've tried (E and
> Vampire) were using resolution
> https://en.wikipedia.org/wiki/Resolution_(logic)
> <https://en.wikipedia.org/wiki/Resolution_(logic)>, which doesn't
> work for a para-consistent logic like PLN, at least not
> out-of-the-box. On top of that PLN is probabilistic (or even
> meta-probabilistic we could say). These make it difficult or at best
> unnatural to use traditional automatic theorem provers. Maybe
> there's an easy way, or a more general framework that I missed, but
> that was my impression when I studied the domain.
>
> Nil
>
>

Ben Goertzel

06.10.2017, 04:37:44
to opencog
***
OTOH, If modules or projects were usable in isolation, and the
dependencies could be effectively treated as black boxes (as most
software dependencies are), or even simulated/mocked, and if
meaningful experimentation and feedback still able to be done within
the narrow scope of that one module, then maybe the tutorials won't be
so pointless.
***

yes, I agree that narrow AI is easier than proto-AGI ... and that if
we put more effort into wrapping up OpenCog components so they could
be used as pure narrow-AI components in themselves, or for specific
narrow applications, then this would attract more developers toward
these narrow-AI applications and tools... and work on these narrow
applications and tools would indirectly benefit the quest for
OpenCog-based AGI...

Nevertheless I feel that the bottleneck is not currently wisdom about
"what would be nice to do" or "what should be done" but rather the
lack of any resources earmarked for making these types of
improvements.... It's not like OpenCog Foundation has a large staff
and budget that is being frittered away on other stuff... Nearly all
current OpenCog dev is happening because of commercial projects
wanting to use OpenCog bits and pieces and aspects for various
specific purposes, and this sort of funded dev is great but doesn't
tend to lead to work focused on making the infrastructure easy for
newbies...

TensorFlow has wonderful documentation, beautiful visualization,
elegant modularization, etc. It's lovely for what it is. How much do
you think Google spent on these aspects?

ben

Mark Nuzz

06.10.2017, 15:30:53
to ope...@googlegroups.com, ivan....@gmail.com
Perhaps this is close to where the real disagreement is. On one hand,
the project is striving for modularity in the form of separate
compilation of modules and components. On the other hand, the idea of
having them as architecturally separate components elicits a belief
that they will be treated as narrow AI, and I know how you (rightfully)
feel about narrow AI. There is probably truth in both of our
arguments, but the difference is that I feel that the architecture I
advocate will lead to AGI much faster than otherwise. And I don't
think that this will necessarily shift the focus to the domain of
narrow AI. Rather, it will still be AGI development, but with the
variability in statefulness greatly reduced, so that a developer will
not have to spend time studying all the other moving parts just to
feel comfortable working on a specific component. That does not
necessarily make it narrow AI by default.

I also believe that it's easier to get resources earmarked for
improvements when a more effective pitch is made for those
improvements. If there's no wisdom about what should be done, then
there's no proposal or plan that makes sense. And if there's no
proposal, there's no effective pitch. No effective pitch means no
money. You say that the bottleneck is not A but rather B; I say that
A is a crucial step toward remedying B; so perhaps this could be an
idea for some determined volunteers to pursue. It may not be
something easy or even possible for the core crew to do, given the
tied-up resources, but it might be possible for a small number of
volunteers to accomplish, given enough motivation.

Most of the money that gets spent by companies like Google on those
top notch architectural aspects, goes toward the leap from being in
the top 10%, to the top 1% of capabilities, or from the top 2% to the
top 0.1%. Talent acquisition involves a bidding war with other
companies, and the economics of it obviously aren't directly
applicable to libre projects... As you might know, Norvig's team
applies narrow-AI techniques to the data around hiring and talent
markets, all in pursuit of concentrating the best talent in that one
company. There are plenty of examples of freeware and libre software
with zero budget that have elegant architectures. Inventing such
architectures for the first time may be costly, but re-using
paradigms that are already battle-tested and proven is not
necessarily costly. The catch here is that most programmers don't
know how to do this, as it's not a domain that exists on most
projects. However, it is a domain that I have a personal interest in,
so I would like to help out if anyone else is able to participate...

I think that it would be a better idea at this point if Ivan and I
(and whomever else is interested) could come up with a more specific
proposal on an architectural design based on this discussion, while
also addressing all of your concerns brought up here. I can't promise
we would be successful (you know me well enough to know how terrible I
am at following through with ideas), but I think a more detailed
proposal would be more productive than debating on a mailing list.
Since a lot of the modularization is already there, as you mentioned,
it might not be too much work at all for one or two people (but
there's no way to know, and no promises). It won't be easy for a
developer who isn't intimately familiar with the system, though it is
certainly possible...

If I make a proposal, it won't be something that breaks the bank, and
I would hope to provide plausible estimates of how it would create a
return on whatever modest investment of time it would take.
Specific flaws in the proposal can be identified by anyone and then
fixed or addressed, and if at the end of the day it is rejected due to
purely philosophical differences and not due to hard technical errors,
then that makes it a candidate for some volunteers to work on it as a
side project (not a fork) and let the results speak for themselves. It
would probably be something like an alternate build system rather than
a restructuring of the codebase, and treated as a proof of concept
rather than as a replacement for anything. If people who get stuck or
intimidated by the project as-is find it easier to get involved via
the alternate system, then I would consider it a proven success. And
*then* a better pitch could be made to get the resources needed for a
more involved architecture change. This would happen over a long
period of time, of course...

Will definitely need help on this though, so if Ivan and whomever else
is interested can message me in private perhaps we can get started on
brainstorming.

Ben Goertzel

07.10.2017, 07:40:40
to opencog, ivan....@gmail.com
Nuzz, etc.

Just to add a different but related dimension to the discussion -- the
way we are going to wrap OpenCog functionality in SingularityNET
agents is a bit different…

Each agent will carry out a specific functionality, which may be
generic or may be domain-specific

For instance

1) An inference agent may take in a set of premises (perhaps
probabilistically or fuzzily weighted) and a time limit and a range of
effort that’s OK to spend, and output a set of conclusions

2) A question-answering service may take an English-language question
as input, and a time limit and a range of effort that’s OK to spend,
and output a list of answers (which might be English language
sentences and/or URLs to relevant resources)

So the SingularityNET agents will be highly “modularized”, each one
carrying out a well-defined set of tasks (with certain input and
output types), where there is a set of ontologies and each task is
characterized by a set of terms in one or more ontologies...
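
To make the shape of this concrete, here is a minimal Python sketch of such a task-typed agent interface. All of the names and fields below (`InferenceRequest`, `QARequest`, `ontology_terms`, `handle`, etc.) are illustrative guesses based on the description above, not SingularityNET's actual API:

```python
# Illustrative only: a task-typed agent interface where each request
# carries a time limit and an acceptable effort range, and each agent
# advertises the ontology terms for the tasks it performs.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class InferenceRequest:
    premises: List[Tuple[str, float]]   # (statement, weight) pairs
    time_limit_s: float                 # wall-clock budget
    effort_range: Tuple[float, float]   # acceptable effort to spend

@dataclass
class QARequest:
    question: str                       # English-language question
    time_limit_s: float
    effort_range: Tuple[float, float]

class Agent:
    ontology_terms: List[str] = []      # tasks this agent advertises
    def handle(self, request) -> List[str]:
        raise NotImplementedError

class TrivialQAAgent(Agent):
    ontology_terms = ["question-answering"]
    def handle(self, request: QARequest) -> List[str]:
        # A real agent would consult OpenCog (or another AI tool)
        # behind the scenes; this one just echoes, to show the
        # input/output contract.
        return [f"(no answer found for: {request.question})"]

agent = TrivialQAAgent()
req = QARequest("What is an Atomspace?", time_limit_s=5.0, effort_range=(0.0, 1.0))
print(agent.handle(req))
# ['(no answer found for: What is an Atomspace?)']
```

The point of the sketch is only the contract: typed inputs with explicit time/effort budgets, typed outputs, and task discovery via ontology terms, while the implementation behind `handle` stays free to interact with other components however it likes.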

Behind the scenes, however, achieving this kind of modularization at
the "user" level (where the user could be after a particular abstract
AI functionality like logical inference, or a particular application
functionality like question answering) does not require strict
modularity at the underlying OpenCog dynamics level. For instance, it
does not require that the interface between inference and language
processing or evolutionary learning be restricted to a simplistic
interface that hampers the flexibility of such interactions.

In short what we will do in this context is to provide a simplified
way to access (general-purpose or application-specific) *services*
implemented using OpenCog (along with many services implemented using
other tools besides OpenCog — SingularityNET will not be restricted to
OpenCog agents by any means)…. But this does not require a
commensurate simplification of the world experienced by OpenCog AI
developers (i.e. those who are contributing to rather than utilizing
OpenCog AI).

For OpenCog AI development, I think we need better debugging tools,
better visualization tools, better documentation, better tutorials,
etc. But I am unconvinced that we can have a stricter modularization
without destroying the AGI potential of the platform. There is a bit
of a challenge here in that the crux of AGI is the interaction and
interpenetration of different functionalities. This does not rule out
modularity in the sense of software dependency management — one can
make the building of different AI functionalities more separate than
is currently the case. But it does perhaps rule out making simplified
interfaces that constrain the flexibility of interaction between
different AI modules. The different AI modules need to interact via
their common activity updating the Atomspace in real time, and this is
bound to be a complex variety of interactions, with new aspects to be
explored on an ongoing basis as the AGI R&D proceeds.

— Ben

Linas Vepstas

07.10.2017, 09:39:59
to Nil Geisweiller, opencog
Hi Nil,

On Fri, Oct 6, 2017 at 6:30 AM, Nil Geisweiller <ngei...@googlemail.com> wrote:


On 10/06/2017 02:10 AM, Linas Vepstas wrote:
it would be nice to have a fast crisp prover so that the system could jump to conclusions, and pln more slowly in the background.

Yes, even for our rule engine alone there is a benefit to that. On top of being faster to evaluate, crisp rules tend to have less premises than their probabilistic counterparts.

Then the question is how to set the TV of these conclusions. If the axioms are crisps with (stv 1 1) or (stv 0 1), then the conclusions would be (stv 1 1) or (stv 0 1). But if the axioms are non-crisp, then I guess the crisp rules could set (stv 1 Epsilon) or (stv 0 Epsilon), just to express that something is possibly true or false. Or else we can create a new TV type for it.

You don't need a new TV type; you can just store it in parallel, as just another value on an atom.  Recall that the current TV is stored by saying

atom->setValue (PredicateNode("*-TruthValueKey-*"), some_tv);

you could just store

 atom->setValue (PredicateNode("*-CrispTruthKey-*"), crisp_tv);

and look it up that way, if/when you need it, for example to provide a "backbone" around which fuzzy explorations can be done.

So here's a completely different but related idea:  First, use a crisp reasoner to deduce what happens whenever strength>0.9999.  Next, do it again, but now for strength>0.8.  (but still using the crisp reasoner: just take strength>0.8 to mean "true"). This should have a "broader" set of consequences.  Do it again for strength>0.6 - this causes even more possibilities to be explored.  

It seems like these three cases can be treated as "lower bounds" of what we might expect PLN to find.   That these could be used to guide/limit what PLN explores.

Alternately, if this was fast enough, you could do this 100 times for 100 different truth cutoffs, and build up a distributional TV...
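
A minimal sketch of that cutoff sweep, with made-up names (`crisp_entails` here is a trivial stand-in for a real crisp reasoner, and this is not the AtomSpace API):

```python
# Sweep strength cutoffs: at each cutoff, treat every axiom whose
# strength clears the bar as crisply true, run a crisp reasoner, and
# record whether the conclusion holds. The per-cutoff results form a
# crude distributional TV.

def crisp_entails(axioms, conclusion):
    # Stand-in for a fast crisp reasoner: a conclusion "follows" iff
    # it is literally among the crisp axioms (real inference omitted).
    return conclusion in axioms

def cutoff_sweep(weighted_axioms, conclusion, cutoffs):
    results = {}
    for cutoff in cutoffs:
        crisp = {a for a, strength in weighted_axioms.items()
                 if strength > cutoff}
        results[cutoff] = crisp_entails(crisp, conclusion)
    return results

axioms = {"A": 0.95, "B": 0.7, "C": 0.5}
print(cutoff_sweep(axioms, "B", [0.9999, 0.8, 0.6]))
# {0.9999: False, 0.8: False, 0.6: True}
```

As the cutoff is lowered, more axioms become "true" and the set of crisply derivable consequences broadens, which is exactly the lower-bound behavior described above.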

I find this idea exciting!  It seems plausible, doable ...

--linas




Linas Vepstas

07.10.2017, 10:12:20
to opencog, ivan....@gmail.com
I agree w/ Ben. There is a significant risk that modularization will lead to premature optimization.  This is a trap that opencog has repeatedly fallen into: whizzy, nice-looking architectures that were just plain wrong and had to be ripped out.  Over-engineering something too early, before the concepts are clear, just leads to longer-term failures.

For the last ten years, I've been thinking of the atomspace as a "graph database". In the last 6+ months, I'm starting to realize that this is the wrong viewpoint. I think I know a better one, but it takes some explaining.

The point is: if you think of it as a "graph database", you will design things to work in a certain way, you will optimize in a certain way, and you will optimize it in a way that it will work poorly with the actual algorithms/actual data that we need.  And that is a potential disaster.

In fact, I'm sort-of worried that Ben might go out and hire someone to create some "optimized atomspace", and not tell me, and that person will create a whizzy clean architecture that is just plain wrong. And then I'll argue and say ugly, mean things and everyone will get mad (at me).

Much of what I do is still an exploration, false starts need to be quickly discarded, rather than being set in the hard stone of "modular architecture".

--linas




--

Nil Geisweiller

08.10.2017, 03:54:39
to linasv...@gmail.com, Nil Geisweiller, opencog
On 10/07/2017 04:39 PM, Linas Vepstas wrote:
> So here's a completely different but related idea: First, use a crisp
> reasoner to deduce what happens whenever strength>0.9999. Next, do it
> again, but now for strength>0.8. (but still using the crisp reasoner:
> just take strength>0.8 to mean "true"). This should have a "broader" set
> of consequences. Do it again for strength>0.6 - this causes even more
> possibilities to be explored.
>
> It seems like these three cases can be treated as "lower bounds" of what
> we might expect PLN to find. That these could be used to guide/limit
> what PLN explores.
>
> Alternately, if this was fast enough, you could do this 100 times for
> 100 different truth cutoffs, and build up a distributional TV...

That's an interesting idea. You could

1. Sample a probability for each atom in the KB (the axioms) according
to its TV
2. Sample, according to these probabilities, whether each axiom is
true or false
3. Run crisp-PLN over the discretized theory, save the output
4. Repeat 2. N times to obtain a probability for the output
5. Repeat 1. M times to obtain a second-order probability, to
regenerate the output TV
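
A rough Python sketch of the five steps above. The TV sampling in step 1 is a crude stand-in (a real TV would define a proper second-order distribution), and `crisp_pln` is just a placeholder reasoner, so all names here are illustrative:

```python
import random

def monte_carlo_pln(axioms_tv, crisp_pln, conclusion, M=20, N=50, seed=0):
    """Approximate probabilistic inference by repeated crisp runs.

    axioms_tv maps atom -> (strength, confidence); crisp_pln is a
    placeholder crisp reasoner: crisp_pln(theory, conclusion) -> bool.
    """
    rng = random.Random(seed)
    second_order = []
    for _ in range(M):
        # 1. sample a probability for each axiom from its TV (crude
        #    stand-in: jitter the strength by up to (1 - confidence)/2)
        probs = {a: min(1.0, max(0.0, s + (1.0 - c) * rng.uniform(-0.5, 0.5)))
                 for a, (s, c) in axioms_tv.items()}
        hits = 0
        for _ in range(N):
            # 2. sample a crisp (discretized) theory from those probabilities
            theory = {a for a, p in probs.items() if rng.random() < p}
            # 3. run the crisp reasoner, save the outcome
            hits += crisp_pln(theory, conclusion)
        # 4. N inner runs give one first-order probability estimate
        second_order.append(hits / N)
    # 5. M outer runs give samples of a second-order distribution
    return second_order

membership = lambda theory, c: c in theory   # toy "reasoner"
samples = monte_carlo_pln({"A": (0.9, 0.8), "B": (0.3, 0.5)}, membership, "A")
print(len(samples), min(samples) >= 0.0, max(samples) <= 1.0)
# 20 True True
```

The M outer samples can then be summarized into a TV (e.g. mean strength plus a confidence derived from the spread), which is where the M*N*exp(beta*L) versus exp(alpha*L) trade-off would be paid.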

I suppose this type of crisp-PLN Monte Carlo simulation should
converge to PLN. The advantage could be real, though: assuming PLN
complexity grows with exp(alpha*L) and the complexity of crisp-PLN
grows with exp(beta*L), with beta < alpha, L being the length of the
proof, we'd reach a point where M*N*exp(beta*L) < exp(alpha*L).

Certainly an idea to keep in mind.

Nil

Curtis Michael Faith

08.10.2017, 04:15:21
to opencog
I've been thinking about these issues for quite some time. It is my favorite kind of work. How does one simplify complexity? How can one improve a system's expressiveness and performance at the same time? How can one build better systems with less work and greater reliability?

There are three major recent innovations in thinking in my head, some echoed by similar expressions from Linas in his recent posts on "sheafs" for natural language grammar induction.

1) We've been thinking in nouns when verbs have all the connectivity: thinking of nodes as the thing, rather than of the pipes of semantic connection between them. We've been thinking of set membership, i.e. of connections, when we should have been thinking of the semantic flows from and towards them. In more concrete hypergraph terms, we need edges that have an ontology of connection types and categories with semantic implication. The type of connection or relationship between the nodes is the source of all semantic content. Nodes in a vacuum mean nothing.

2) The natural unit is a directed subgraph defining a precise semantic category instance. Some with an inflowing topology, some with an outflowing, but each with a core node: the verb of the connection, the semantics represented by the presence of the connection itself; or the key noun with an associated attribute set. These subgraphs must include and support edges only partially connected either into, or out of, the core, akin to + or - in Link Grammar terms.

3) We should be thinking of semantic flows instead of the atoms of state that result from changes in flows. Semantic flows represent isomorphisms of state applied to a sub-graph. This happens to be the starting point for SingularityNET agents. What flows in as data and what flows out as results? So we are working on this problem at the high level while I and others are thinking about how best to represent the abstract constructs definable in atomese.

In many ways, I see these times for AGI as akin to the early days of programming when we first made the jump from machine language to assembly language and then we got C and Fortran and Cobol, with the semantics tied much more closely and directly to the problem domain: whether systems- or scientific- or business-programming.

So I see OpenCog Atomese as the assembly code for Sophia's mind. We want it to stay flexible because we do not want to limit what is possible. But it is too much work to write in assembly all the time. We need compressions of complexity and a higher-level form for more efficient and expressive work at higher levels.

We have not yet built, anyone anywhere yet, the semantic analog to C for AI, let alone the more modern variants like Go, Rust, Swift or even Python. There is an impedance mismatch between the ways that current batch-oriented Von-Neumann bottlenecked systems run and the ideal ways that a mind wants to learn in parallel. There is a greater need for efficient shared semantic context among the many parts communicating. There is a greater need for visualization into the implications and nuanced semantics implied by the connections.

Much work to be done but all identified and doable.

Linas Vepstas

unread,
08.10.2017, 04:43:42
to Nil Geisweiller, opencog
Ah, well, in this rephrasing, you've converted it into a probabilistic-programming problem.  I recall both you and I were at AGI 2015 and there were several papers on this, and clearly work stretching back a decade or more ... but that work, those papers were always focused on programming languages, and not on logic.  I want to smack my forehead and say "but of course!" and wonder/marvel how it is that this hasn't been done before (maybe it has been, and we don't know?)

Ben was mumbling something to me about adding probabilistic programming to opencog, but I did not understand what he was trying to achieve.  This, by contrast, seems to be a well-defined, well-contained problem, which could give some decent results.  It also has the benefit of allowing you to start with low values of M,N to get a rough estimate, and refine over time.

So, the question is: what's the base tech?  Starting with SAT solvers seems like too low a level.  I like answer-set programming (ASP) because it explicitly deals with first-order logic and therefore is a natural fit for PLN.  (and of course, the ASP solvers are now blazingly fast).   A third possibility would be a theorem prover, like Coq or whatever, but these might be a poor fit for PLN. I dunno.

Certainly an idea to keep in mind.

More, my knee-jerk reaction is to say "it's an idea we should pursue".

To return to the original thread: in pursuing this idea, how much of it should be developed as an "independent module", and how much should be integrated with opencog?   Certainly, it would be dumb to re-invent atoms yet again, but the idea of a stand-alone module seems to ask for that.

--linas


Nil



I find this idea exciting!  It seems plausible, doable ...

--linas



    Nil

Linas Vepstas

unread,
08.10.2017, 05:04:54
to opencog
Hi Curtis,

On Sun, Oct 8, 2017 at 11:15 AM, Curtis Michael Faith <curtis....@gmail.com> wrote:


3) We should be thinking of semantic flows instead of the atoms of state that result from changes in flows. Semantic flows represent isomorphisms of state applied to a sub-graph.

Not to make your head explode, or anything like that, but there is some synchronicity in the universe.  Yesterday's news includes an obit for Vladimir Voevodsky https://plus.google.com/u/0/+johncbaez999/posts/VhWp7s1PYp3 which contains remarks similar in flavor: "infinity-groupoids are sets in the next dimension".

Voevodsky is known for instigating work on the HoTT book -- Homotopy Type Theory -- which basically shows how computer programs, logical proofs and similar "discrete" networks can be "continuously" transformed homotopically, isomorphically into one another.  It is this work that has revolutionized theorem provers (such as Coq, or Ben's favorite, Agda) in mathematics.  I've been trying to steal ideas from that general area and apply them to opencog/atomese.

The book is free, and anyone interested in what a "type" is should read at least the first few chapters. The types of that book are more-or-less exactly the same thing as the types in opencog, or the types in link-grammar.

 
This happens to be the starting point for SingularityNET agents. What flows in as data and what flows out as results? So we are working on this problem at the high level while I and others are thinking about how best to represent the abstract constructs definable in atomese.

I promised earlier to write a smart contract in atomese, and I haven't forgotten that promise. The current stumbling block is how to define a container in atomese. (a container being a secure "sandbox" in which crypto operations can be safely performed away from the eyes of spies.) 

In many ways, I see these times for AGI as akin to the early days of programming when we first made the jump from machine language to assembly language and then we got C and Fortran and Cobol, with the semantics tied much more closely and directly to the problem domain: whether systems- or scientific- or business-programming.

So I see OpenCog Atomese as the assembly code for Sophia's mind. We want it to stay flexible because we do not want to limit what is possible. But it is too much work to write in assembly all the time. we need compressions of complexity and a higher-level form for more efficient and expressive work at higher levels.

We have not yet built, anyone anywhere yet, the semantic analog to C for AI, let alone the more modern variants like Go, Rust, Swift or even Python.

Yes.

--linas
 
There is an impedance mismatch between the ways that current batch-oriented Von-Neumann bottlenecked systems run and the ideal ways that a mind wants to learn in parallel. There is a greater need for efficient shared semantic context among the many parts communicating. There is a greater need for visualization into the implications and nuanced semantics implied by the connections.

Much work to be done but all identified and doable.


Amirouche Boubekki

unread,
08.10.2017, 06:43:12
to ope...@googlegroups.com, ivan....@gmail.com
On Sat, Oct 7, 2017 at 4:12 PM Linas Vepstas <linasv...@gmail.com> wrote:
 
For the last ten years, I've been thinking of the atomspace as a "graph database". In the last 6+ months, I'm starting to realize that this is the wrong viewpoint. I think I know a better one, but it takes some explaining.

What is your new viewpoint?

Linas Vepstas

unread,
08.10.2017, 07:40:19
to Nil Geisweiller, opencog
Ah, well, in this rephrasing, you've converted it into a probabilistic-programming problem.  I recall both you and I were at AGI 2015 and there were several papers on this, and clearly work stretching back a decade or more ... but that work, those papers were always focused on programming languages, and not on logic.  I want to smack my forehead and say "but of course!" and wonder/marvel how it is that this hasn't been done before (maybe it has been, and we don't know?)

Ben was mumbling something to me about adding probabilistic programming to opencog, but I did not understand what he was trying to achieve.  This, by contrast, seems to be a well-defined, well-contained problem, which could give some decent results.  It also has the benefit of allowing you to start with low values of M,N to get a rough estimate, and refine over time.

So, the question is: what's the base tech?  Starting with SAT solvers seems like too low a level.  I like answer-set programming (ASP) because I know it well, and it explicitly deals with first-order logic and therefore is a natural fit for PLN.  (and of course, the ASP solvers are now blazingly fast).   A third possibility would be a theorem prover, like Coq or whatever, 


Linas Vepstas

unread,
08.10.2017, 07:47:04
to opencog, Ivan Vodišek

... a simple and convenient mechanism for working with graphs "locally", by making the nearest-neighbors of a vertex apparent.

The traditional textbook-canonical way of specifying a graph is to state that it is a set of vertexes, and a set of edges that connect pairs of vertexes. The problem with this description is that given any vertex, one has no idea of what edges are connected to it, without scanning the entire set of edges. Another problem is that vertexes and edges are not composable; that is, when they are composed together, they are no longer vertexes or edges, but a more general type: a "subgraph". By contrast, sheaves carry local information, and are composable.

Given a vertex V, a "section" is defined as a set of pairs (V,E) of that vertex V and all edges E that are attached to it. That's it! Very simple! A section can be envisioned as a "spider", with the vertex V as the body of the spider, and the edges as the legs of the spider.

Sections are composable, in that several can be connected together by joining ("connecting") edges. The result is still a section, in that it has a central blob as the spider-body, and a whole bunch of legs sticking out. Composing sections in such a way that the edges connect only in legal ways is called "parsing".

Another way of visualizing sections is to envision a jigsaw-puzzle piece instead of a spider. The vertex V is a label on the puzzle-piece, and each leg is a tab or slot on the puzzle-piece. The tabs or slots are now obviously connectors: this emphasizes that jigsaw-puzzle pieces can be connected together legally only when the connectors fit together. Again: the act of fitting together puzzle-pieces in a legal fashion is termed "parsing".

In standard mathematical terminology, the spider-body or jigsaw-label is called the "germ". It is meant to evoke the idea of a germinating seed, as will become clear below.
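The spider/jigsaw picture is easy to make concrete. Below is a minimal sketch (the class and function names are illustrative, not an actual OpenCog or Link Grammar API): a section is a germ plus a list of connectors, each with a label and a "+" (tab) or "-" (slot) direction, and composition joins two sections along one matching connector pair.

```python
from dataclasses import dataclass

@dataclass
class Section:
    germ: str          # the vertex label at the center (the spider body)
    connectors: list   # the legs, e.g. [("S", "+"), ("O", "-")]

def connectable(a, b):
    """Find a pair of same-label, opposite-direction connectors, if any."""
    for ca in a.connectors:
        for cb in b.connectors:
            if ca[0] == cb[0] and {ca[1], cb[1]} == {"+", "-"}:
                return ca, cb
    return None

def compose(a, b):
    """Join two sections along one legal connector pair.

    The result is again a section: a bigger spider body, with the
    remaining unconnected legs still sticking out.
    """
    match = connectable(a, b)
    if match is None:
        raise ValueError("no legal connection")
    ca, cb = match
    legs = ([c for c in a.connectors if c != ca] +
            [c for c in b.connectors if c != cb])
    return Section(germ=f"({a.germ} {b.germ})", connectors=legs)

# "John" offers a subject link S+; "runs" wants a subject S- and
# still dangles an O+ leg after composition.
john = Section("John", [("S", "+")])
runs = Section("runs", [("S", "-"), ("O", "+")])
parsed = compose(john, runs)  # germ "(John runs)", one leg ("O", "+") left
```

Repeatedly applying `compose` until no illegal legs dangle is exactly the "parsing" described above.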

Diagrammatic illustrations of jigsaw puzzle-pieces can be found here:


--linas

Nil Geisweiller

unread,
09.10.2017, 05:57:46
to linasv...@gmail.com, Nil Geisweiller, opencog
On 10/08/2017 11:43 AM, Linas Vepstas wrote:
> So, the question is: what's the base tech? Starting with SAT solvers
> seems like too low a level. I like answer-set programming (ASP) because
> it explicitly deals with first-order logic and therefore is a natural
> fit for PLN. (and of course, the ASP solvers are now blazingly fast).
> A third possibility would be a theorem prover, like Coq or whatever, but
> these might be a poor fit for PLN. I dunno

They might all be OK, depending on the task. The problem I'm seeing is
how to turn a backward chainer query *with variables* into theorem(s) in
these formalisms.

I guess I would know how to turn

Evaluation P A

where P and A are fully defined into a Coq theorem, but what if A is
replaced by X

Evaluation P X

and we want to find inference chains instantiating as many X as possible such that P(X) holds.

Can these tools do that?

I suppose ASP can. But can a general automatic prover like Coq? I
don't know.
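To make the variable-instantiation query concrete, here is a toy sketch in Python (the predicate and constant names are made up for the example, and the "chainer" is a naive forward closure, nothing like real PLN): given crisp facts and a rule human(X) -> mortal(X), enumerate every X for which mortal(X) is derivable.

```python
# Crisp facts as (predicate, argument) pairs, plus one unary rule.
facts = {("human", "socrates"), ("human", "plato"), ("stone", "rock1")}
rules = [("human", "mortal")]  # human(X) -> mortal(X)

def forward_close(facts, rules):
    """Naive forward chaining to a fixed point."""
    closure = set(facts)
    changed = True
    while changed:
        changed = False
        for p_head, q_head in rules:
            for pred, arg in list(closure):
                if pred == p_head and (q_head, arg) not in closure:
                    closure.add((q_head, arg))
                    changed = True
    return closure

def query(pred, facts, rules):
    """All bindings X such that pred(X) is in the deductive closure."""
    return sorted(arg for p, arg in forward_close(facts, rules)
                  if p == pred)

answers = query("mortal", facts, rules)  # bindings for X in mortal(X)
```

An ASP solver answers exactly this kind of query natively (the stable models contain all ground instantiations); for Coq-style provers, the query has to be rephrased as an existence proof, which is the mismatch being pointed at above.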

I would be tempted to try first with a crisped version of PLN itself, as
this would require almost no effort.

Of course existing tools can be a lot more efficient than crisp-PLN, at
least for some tasks; I doubt for everything, though. For that,
ultimately nothing is gonna beat meta-learning, I believe, so that would
be my only reservation about spending time on these other tools. But I agree
that it's a very interesting pursuit.

Nil

Linas Vepstas

unread,
09.10.2017, 13:56:11
to Nil Geisweiller, opencog
The full generality of Coq might not be needed.

I've coded some fairly large systems in ASP, so I feel very comfortable with it. It's a constraint-solving system, and if you've never worked with a constraint system, it can be weird and confusing to learn how to use it. It took me weeks and lots of baby examples before I finally "got it", after which it's "obvious".

The nature of constraint systems is that they are neither forward nor backward chainers: they are solvers.  They use the Davis-Putnam algo -- this is boat-loads faster than backward/forward chaining, since it automatically prunes all possible branches that cannot be true.  It reduces the problem to just one tightly connected network, and then all possible truth-value assignments can be explored exhaustively. This makes it blindingly fast - literally thousands or millions or billions of times faster than backward/forward chaining.  It revolutionized the entire industry.  I've used it to simulate 32-bit CPUs: so you could explore e.g. 2^64 or 2^80 possibilities in milliseconds because it could prune effectively all of that combinatorial explosion to a tight nucleus of a few hundred or a few thousand possibilities, which can be exhaustively explored.
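The pruning being described is the heart of the Davis-Putnam-Logemann-Loveland (DPLL) procedure: unit propagation kills entire branches before any exhaustive search begins. A minimal, deliberately simplified sketch (not how a production solver like clingo or MiniSat is actually engineered):

```python
# Clauses are lists of nonzero ints; -n means "not n" (DIMACS style).
def unit_propagate(clauses, assignment):
    """Repeatedly force unit clauses; this is the branch-pruning step."""
    while True:
        units = [c[0] for c in clauses if len(c) == 1]
        if not units:
            return clauses, assignment
        lit = units[0]
        assignment = assignment | {lit}
        new = []
        for c in clauses:
            if lit in c:
                continue                # clause satisfied, drop it
            reduced = [l for l in c if l != -lit]
            if not reduced:
                return None, None       # empty clause: conflict, prune
            new.append(reduced)
        clauses = new

def dpll(clauses, assignment=frozenset()):
    clauses, assignment = unit_propagate(clauses, assignment)
    if clauses is None:
        return None                     # whole branch pruned, no search
    if not clauses:
        return assignment               # all clauses satisfied
    lit = clauses[0][0]                 # branch on one unassigned literal
    for choice in (lit, -lit):
        result = dpll(clauses + [[choice]], assignment)
        if result is not None:
            return result
    return None

# (x1 or x2) and (not x1 or x3) and (not x3):
# the unit clause (not x3) cascades, no branching is ever needed.
model = dpll([[1, 2], [-1, 3], [-3]])
```

On that tiny instance, propagation alone fixes all three variables; modern solvers add clause learning and clever heuristics on top of this skeleton, which is where the "millions of times faster" factor comes from.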

I assume that the theorem provers use similar algos, if not exactly the same algo.

This is one reason that it's probably a waste of time to try to write a crisp backward/forward chainer for PLN -- chaining is essentially an obsolete technology for crisp logic.  I suppose I'm overstating this, but really, based on everything I've read, the bad old days of prolog and circuit simulators are over, and there's no point in going back there again.

--linas