AGI-Conference 2015 follow up


Tony Lofthouse

Jul 26, 2015, 4:40:21 AM
to open...@googlegroups.com

Hi all, I wanted to give a brief list of NARS highlights from the AGI-Conf for those that couldn’t make it in person.

 

NARS was well represented at the conference by Patrick Hammer, Robert (sorry Robert, I can't remember how to spell your last name), and myself (Tony Lofthouse). There may have been others, but if so we didn't manage to connect at the event.

 

There appeared to be a good deal of interest in NARS, with several people seeking me out for further information. It was particularly pleasing to see a growing recognition of the philosophy underlying NARS; the AERA project (Reykjavik University) seems particularly in line with many of the NARS principles.

 

Let me begin with an assessment of the NARS-related papers.

 

There were two presented papers related to NARS: "Assumptions of Decision Making Models in AGI" (Pei Wang and Patrick Hammer) and "Issues in Temporal and Causal Inference" (Pei Wang and Patrick Hammer). There was also a workshop session on 'the self model' where the NARS approach was explained. Patrick presented all of these.

 

There was also another general paper that Pei had co-authored: "Safe Baby AGI" (Jordi Bieger, Kristinn R. Thorisson, and Pei Wang). There seemed to be a broad consensus that it is not practical (or even possible) to constrain the 'nature' of an AGI given AIKR (the Assumption of Insufficient Knowledge and Resources); it was generally accepted that 'nurture' is the practical direction here.

 

This was Patrick's first time presenting at a conference, and he grew in confidence with each delivery. By the third presentation, "Self-reference in NARS", he was very capably handling a somewhat adversarial challenge from Ben Goertzel. In my view he dealt with Ben's main objection very well.

 

There were two challenges to the presentation, both from Ben: firstly, that a pre-defined 'Self' node/concept is not cognitively realistic, since this is not how children appear to develop a sense of self; and secondly, that the NAL logic itself 'didn't work anyway'.

 

Patrick did an excellent job of defending the NARS position on self-reference. His response was that NARS COULD develop a sense of self through experience of cause and effect: if NARS carries out an operation that causes an observable effect, the system can infer, through temporal inference, that the operation was caused by 'a self'. This 'self' concept would then grow in richness over time through the inclusion of additional experience. However, by introducing a 'Self' concept explicitly, the learning of a 'self' model can be made much more efficient.
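To make the idea concrete, here is a toy sketch of that inference pattern in Python. This is not actual NARS code; the event names, the SELF term, and the evidence bookkeeping are illustrative assumptions, loosely following the NAL temporal-inference idea.

```python
# Toy illustration of inferring 'a self' from operation/effect pairs.
# Not NARS code: the terms and bookkeeping here are simplified stand-ins.

from collections import Counter

class ToySelfModel:
    def __init__(self):
        # Counts of (operation, effect) co-occurrences within a time window.
        self.evidence = Counter()

    def observe(self, executed_op, observed_effect):
        """Record that an executed operation was followed by an effect."""
        self.evidence[(executed_op, observed_effect)] += 1

    def beliefs(self):
        """Derive temporal-implication beliefs (op =/> effect) and
        attribute the operation to a 'SELF' term as its cause."""
        for (op, effect), w in self.evidence.items():
            confidence = w / (w + 1)  # simple evidence-to-confidence mapping
            yield f"<(SELF, {op}) =/> {effect}>", confidence

model = ToySelfModel()
for _ in range(5):
    model.observe("^lift_arm", "arm_is_up")  # acting and seeing the result

for belief, conf in model.beliefs():
    print(belief, f"confidence={conf:.2f}")
```

The point of the sketch is only that repeated operation/effect pairings give mounting evidence for attributing effects to the system's own actions, which is the seed of a 'self' concept.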

               

The second challenge is more difficult to deal with. Ben is clearly of the opinion that the NAL inference rules do not work. This is not new, but I think it needs to be dealt with; I know Ben and Pei have discussed this many times!

 

I believe this is an issue for a couple of reasons. Firstly, Ben is very influential in the AGI community, and if his opinion is that NAL does not work, this will affect adoption of the principles embodied in NARS, which would be a great tragedy. Secondly, and more importantly, we have to be sure that he is not right!

 

The basis for Ben's argument is that in 2005 (or thereabouts) he encoded the NAL inference rules and ran a set of tests, on various datasets, to evaluate NAL's capability compared to a probabilistic approach. His conclusion was that the 'heuristics' for induction and abduction were wrong, although deduction was broadly in line with his expectations.

 

I have challenged his conclusion with a couple of arguments. One, the inference rules were redesigned in 2013 in a well-grounded formalism (Wang, P., Non-Axiomatic Logic, 2013), so the previous analysis is no longer applicable. Two, the basis of his comparison is flawed: NAL is not designed to be a probabilistic inference logic, so comparing it to one does not make sense. I positioned NAL as a 'cognitively' reasonable logic, and NARS as designed to be a tool user, just as humans are. I accepted that NARS cannot generate precise probabilistic results from large datasets unaided, but the same is true of the human mind; this is by design, not a failure of the logic. If you want precise probabilistic results, you provide NARS with an appropriate set of tools. I think the core of Ben's objection is that inductive and abductive inference will, over time, lead to nonsense results in the belief network. This is a valid objection, and I think we need to investigate this possibility.
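For readers who want to see what is actually being disputed, here is a minimal sketch of the core NAL truth-value functions for deduction, induction, and abduction, written as I understand them from the 2013 formalism and the OpenNARS implementation (the evidential horizon k = 1 is the usual default; treat this as a sketch, not an authoritative reference).

```python
# Sketch of NAL truth-value functions (frequency f in [0,1], confidence c in [0,1)).
# Based on my reading of Wang's Non-Axiomatic Logic (2013) / OpenNARS; k = 1.

K = 1.0  # evidential horizon

def w2c(w: float) -> float:
    """Convert an amount of evidence w into a confidence value."""
    return w / (w + K)

def deduction(f1, c1, f2, c2):
    """{M -> P <f1,c1>, S -> M <f2,c2>} |- S -> P (strong rule)"""
    f = f1 * f2
    return f, f * c1 * c2

def abduction(f1, c1, f2, c2):
    """{P -> M <f1,c1>, S -> M <f2,c2>} |- S -> P (weak rule)"""
    return f1, w2c(f2 * c1 * c2)

def induction(f1, c1, f2, c2):
    """{M -> P <f1,c1>, M -> S <f2,c2>} |- S -> P (weak rule)"""
    return abduction(f2, c2, f1, c1)  # induction is abduction with premises swapped

print(deduction(0.9, 0.9, 0.8, 0.9))  # strong rule: confidence stays substantial
print(induction(0.9, 0.9, 0.8, 0.9))  # weak rule: confidence capped below w2c(1) = 0.5
```

The contrast Ben objects to is visible here: the 'weak' rules (induction, abduction) never produce confidence above w2c(1) = 0.5 from a single premise pair, because one pairing is only one piece of evidence; a probabilistic reading of the same rules yields quite different numbers.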

 

My personal highlight was "Modelling Motivation in MicroPsi 2" (Joscha Bach).

 

Have fun

Tony

Tony Lofthouse

Jul 26, 2015, 8:37:09 AM
to open...@googlegroups.com

FYI

 

From: 王培 [mailto:mail_p...@163.com]
Sent: 26 July 2015 13:07
To: tony_lo...@btinternet.com
Subject: Re:[open-nars] AGI-Conference 2015 follow up

 

Hi, Tony,

 

Currently I cannot directly post to the group, so please forward this message to the group for me.

 

Thanks for the nice and informative report. It's a pity that I was unable to attend AGI-15, but I'm glad to know that our project was well presented. I've also heard from other people about the good job you guys have done.

 

I didn't know about the "self model" workshop. I'm glad that Patrick could put something together without much time to prepare. Which other projects were presented?

 

As you have seen, the AERA team has been our closest collaborator in recent years, and this cooperation will go on. One focus will be on "education".

 

As for the debate between Ben and me, it has continued for at least 15 years, both in private and in public. In several cases it got heated, though we still consider each other friends, and we cooperate on topics where we agree (such as fighting against mainstream AI). The most recent exchange on this matter happened in the AGI-13 workshop "Probability theory or not" (http://www.agi-conference.org/2013/workshops/). Unfortunately the papers and videos are no longer at the workshop website, but my paper can be found at http://www.cis.temple.edu/~pwang/Publication/probability.pdf, and the video of my talk may still be somewhere online (YouTube?). The talks of the others may be findable as well. After the workshop I suggested to Ben and Marcus that we do a special issue of JAGI to compare our opinions on this important topic in a more organized way, but both of them declined. Since Ben still holds his opinion (as you said, it is basically "NAL is wrong because it does not follow probability theory") and expresses it publicly, I'll find another opportunity to settle this with him. ;-)

 

Anyway, I'm glad that you enjoyed AGI-15 -- even though the conference series has its problems, it is still more interesting than the other academic conferences I have attended. I don't know whether it was announced during AGI-15, but according to the current plan, AGI-16 will be in New York City in early July 2016, jointly with a few other conferences (such as BICA), to celebrate the 60th anniversary of the Dartmouth meeting, which was closer to AGI than the current AI meetings are. I hope more people from the team will be there, with or without papers. I'll make sure not to miss that one -- at least I won't need a visa to go to NYC. ;-)

 

Best regards,

 

Pei


Tony Lofthouse

Jul 26, 2015, 8:50:03 AM
to open...@googlegroups.com

To answer Pei's question regarding which other projects were presented: as far as I recall, the list is as follows.

 

AERA

Sigma (SOAR derivative)

MicroPSI 2

OpenCog

+

Brain Simulator (GoodAI), though this is more a development environment than an actual AGI

 

If anyone recalls any other systems please shout.

 

Regards

Patrick Hammer

Jul 27, 2015, 6:52:38 PM
to open-nars, tony_lo...@btinternet.com
@Tony: Nice summary! :)

It is interesting that Ben describes NARS conclusions as "heuristics", judged by "not following probability theory". Besides NARS, PLN also fails to follow probability theory, contrary to what its authors seem to claim, as their "heuristic" revision rule shows: once a single rule from which further conclusions can be derived doesn't follow it, the interpretation of their truth values as probabilities breaks down, since the needed assumptions are no longer met. But this does not mean that their truth-value calculations cannot lead to useful derivations for the system, as long as it sufficiently keeps track of evidence (the "strength" measurement in PLN).
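For comparison, here is a minimal sketch of revision done by evidence pooling in the NAL style, which is the kind of evidence tracking I mean. This is my own illustrative Python, assuming the usual evidential horizon k = 1, not code from any NARS release.

```python
# Sketch of NAL-style revision: two judgments about the same statement,
# based on independent evidence, are merged by pooling their evidence.
# Assumes the usual evidential horizon k = 1; illustrative only.

K = 1.0

def c2w(c: float) -> float:
    """Convert a confidence value back to an amount of evidence."""
    return K * c / (1.0 - c)

def revision(f1, c1, f2, c2):
    w1, w2 = c2w(c1), c2w(c2)
    w_pos = w1 * f1 + w2 * f2   # pooled positive evidence
    w = w1 + w2                 # pooled total evidence
    return w_pos / w, w / (w + K)

# Two independent judgments about the same statement:
print(revision(0.9, 0.5, 0.7, 0.5))  # -> (0.80, 0.667)
# Confidence rises above either premise's, because evidence accumulated.
```

Because the pooled confidence is simply a function of total evidence, revision here does not need the independence and completeness assumptions that a fully probabilistic update would require; that is the design choice at issue.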
It would be interesting to shed more light on this topic, surely also for the OpenCog people. I wonder why they were not interested in your suggestion to discuss this topic in more detail in JAGI, although it may be of importance to three AGI projects: OpenCog, AERA, and NARS.

Best regards,
Patrick

Patrick Hammer

Jul 28, 2015, 3:06:17 AM
to open-nars, tony_lo...@btinternet.com
Message from Pei:
"
For people who don't know the story: Ben and I worked together from 1998 to 2001 at Webmind, where he had got the funding to build a "thinking machine" (the name "AGI" wasn't in use at that time). The overall architecture was his design: a common platform supporting multiple "modules". The "reasoning module" was a customized version of NARS designed by me (based on my PhD dissertation). From the beginning, Ben (trained as a mathematician) had the feeling that reasoning should follow probability theory, though he didn't have a design to compete with mine. Some attempts were made in Webmind to re-interpret NARS according to probability theory (as I remember, the task was once assigned to Shane Legg, who later co-founded DeepMind), though not successfully.

After Webmind was dissolved in 2001, Ben started Novamente, then OpenCog, based more or less on the ideas and lessons from Webmind. The "reasoning module" evolved into PLN, which kept some resemblance to its initial form (and therefore to NARS), but its truth values are defined and handled "according to probability theory". Whenever a calculation cannot be derived from probability theory, some "heuristics" are added, and the conclusions are taken to be "approximations of probability theory". Ben insists on interpreting NARS in the same way, and claims that the NAL rules, especially induction and abduction, are wrong, since the same rules in PLN use very different ("probability theory based") truth-value functions. In those years some comparisons between the two models were carried out, though the results were interpreted very differently. It is hard, if not impossible, to compare the two models, because of the lack of a common foundation. When a concrete problem is given to the two systems and they produce different results, what is the standard for deciding which one is better? This problem is not as easy as it looks.

My opinion on this matter has been presented in several publications. In summary, it comes down to the following points:

(1) Probability theory, in its standard form, cannot be used for uncertain reasoning in the AGI context, since its knowledge and resource demands cannot be satisfied.

(2) NAL is not an approximation or extension of probability, though the two models have shared intuitions, even conclusions, here or there.

(3) Since PLN does not respect the axioms of probability theory (for example, it allows inconsistent probability assignments), it cannot legitimately claim to be "based on probability theory". Nor can it be taken as an approximation of probability theory, because any "approximation" must be qualified with an evaluation of how the actual results differ from the optimal/correct results. You cannot use a theory in some places, use your own tricks in others, and still claim to be faithful to that theory.

Ben is undeniably very smart and has a lot of interesting ideas, but by my standard he never has the patience to carefully fit his ideas (and the ideas of others that he quickly absorbs) into a coherent whole. Instead, he believes that intelligence emerges from the "synergy" of different techniques and ideas (see his "Cognitive Synergy: A Universal Principle for Feasible General Intelligence?"), and this attitude also shows in this discussion. I have told him several times that my biggest disagreement with PLN is not with its truth-value functions (which are a secondary issue) but with its semantics -- to me, the truth value (call it "strength" or whatever) in PLN is never clearly defined, so its treatment in different places of the model is justified according to different understandings. Though each formula may look reasonable in isolation, the lack of consistency among them will cause trouble in the long run.

Frankly, I don't enjoy criticizing other AGI projects in public, simply because the field is still young and all projects have obvious weaknesses. However, if someone explicitly claims that NARS is "wrong", I don't mind answering. That is why, so far, I've only criticized OpenCog and AIXI in my publications.

Regards,

Pei
"

Jarrad Hope

Jul 28, 2015, 5:06:30 AM
to open...@googlegroups.com
The criticism seems like it should have been left in the late '90s -- all the media I've read and watched on NARS/NAL clearly state those points.

I mean, it's been 15 years, and if Ben cannot see the project on its own terms, then it only demonstrates inflexibility in his thinking.

My concern is for newcomers. In my own experience, I viewed Ben as an influential figure, and his opinions initially coloured my view of the projects in AGI. I understand his goal is to get people working on OpenCog, but after you take a look at the codebase you can't help but feel the AGI field is a little less legitimate than one once thought.


Patrick Hammer

unread,
Aug 1, 2015, 12:29:30 PM8/1/15
to open-nars
@Jarrad: Yes, this is true; Pei has written several publications on this topic.
I, for my part, see evidence counting as a more fundamental and better justified process than using probability theory without its assumptions being met, in the hope that the error will be small enough for the results to still be useful. But I also understand Ben's view, because sometimes things work well even if the assumptions the principles were designed to work under are not fully met. I think we should cooperate with the OpenCog people here to shed light on the topic.

And it's a very difficult topic. For example, consider some human thought fallacies which can be reproduced by NARS: that they can be reproduced doesn't show us whether we are right or wrong; it only shows that a NARS system tends to make the same mistakes, and show the same symptoms, as humans do. Whether these are "mistakes", or even inevitable properties of the reasoning of intelligent systems, remains an open question. But what I think is that humans are pretty intelligent, so if a system can reason like a human does, at least I would be happy. ^^

But back to shedding light on the topic and away from personal opinions, I think these are some points which could help here:

1. None of us thinks that probability theory is "wrong"; it isn't. We are just worried about what happens if it is used although its assumptions are not met, and about the implications of having too few resources to apply it. Probability is the way to go once a probability space for the domain one reasons about is known and the resources to apply it are sufficient; none of us questions this. So, for example, if NARS plays a simple card game involving, say, a die, a well-educated NARS system will likely beat another NARS system which doesn't know about probabilities. This would be an interesting aspect to demonstrate, showing that NARS can acquire principles of probability theory and use them in cases where the resources suffice. This goes beyond just applying probability theory: the system has to identify when and how its rules can be used, and also whether it is practicable to do so. Some months ago I successfully tried a little coin experiment based on this idea (a minimal sketch of the underlying evidence counting appears after this list); I wonder if we should push further in this direction.

2. How can we justify and compare different uncertainty measurements which are based on different assumptions, in such a way that the relationship between them becomes clearer?

3. Which uncertainty measurements currently in the literature are compatible with the "insufficient knowledge and resources" assumption? And if they aren't, can they at least be approximated with a known error bound which is itself compatible with the "insufficient resources" assumption?

4. Clearly showing why "insufficient knowledge and resources" is indeed a fundamental assumption an AGI system has to make. (Though I think this has already been successfully argued; for example, resource bounds were, to my surprise, a dedicated topic at the AGI conference.)
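Here is the minimal sketch referenced in point 1: evidence counting on repeated coin observations, in the NAL style. The biased coin and the evidence-to-confidence mapping (with horizon k = 1) are my illustrative assumptions, not the actual experiment I ran.

```python
# Evidence counting on coin flips: frequency converges to the observed
# rate while confidence grows with the amount of evidence.
# Illustrative sketch with evidential horizon k = 1.

import random

K = 1.0
random.seed(42)

w_pos = 0.0  # positive evidence (heads observed)
w = 0.0      # total evidence (flips observed)

for n in range(1, 101):
    if random.random() < 0.7:  # a biased coin the system knows nothing about
        w_pos += 1
    w += 1
    if n in (1, 10, 100):
        f = w_pos / w        # frequency: proportion of positive evidence
        c = w / (w + K)      # confidence: how much evidence backs it
        print(f"after {n:3d} flips: f={f:.2f}, c={c:.3f}")
```

Nothing here assumes a probability space up front; frequency and confidence simply summarize the evidence collected so far, which is what makes this kind of measurement compatible with the insufficient-knowledge-and-resources assumption.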

Best regards,
Patrick








