Hi all, I wanted to give a brief list of NARS highlights from the AGI-Conf for those who couldn't make it in person.
NARS was well represented at the conference by Patrick Hammer, Robert (sorry Robert, I can't remember how to spell your last name) and myself (Tony Lofthouse). There may have been others, but if so we didn't manage to connect at the event.
There appeared to be a good deal of interest in NARS, with several people seeking me out for further information. It was particularly pleasing to see a growing recognition of the AGI philosophy underlying NARS; the AERA project (Reykjavik University) seems particularly in line with many of the NARS principles.
Let me begin with an assessment of the NARS-related papers.
There were two presented papers related to NARS: "Assumptions of Decision Making Models in AGI" (Pei Wang and Patrick Hammer) and "Issues in Temporal and Causal Inference" (Pei Wang and Patrick Hammer). There was also a workshop session on 'the self model' where the NARS approach was explained. Patrick presented all of these.
There was also a more general paper that Pei had co-authored, "Safe Baby AGI" (Jordi Bieger, Kristinn R. Thorisson, and Pei Wang). There seemed to be a broad consensus that it is not practical (or even possible) to constrain the 'nature' of an AGI given AIKR (the Assumption of Insufficient Knowledge and Resources); it was generally accepted that 'nurture' is the practical direction on this.
This was Patrick's first time presenting at a conference, and he grew in confidence with each delivery. By the third presentation, on self-reference in NARS, he very capably handled a somewhat adversarial challenge from Ben Goertzel. In my view he dealt with Ben's main objection very well.
There were two challenges to the presentation, both from Ben: first, that a pre-defined 'Self' node/concept is not cognitively realistic, since this is not how children appear to develop a sense of self; and second, that the NAL logic itself 'didn't work anyway'.
Patrick did an excellent job of defending the position of NARS self-reference. His response was that NARS could develop a sense of self through the experience of cause and effect: if NARS carries out an operation that causes an observable effect, the system can infer through temporal inference that the operation was caused by 'a self'. This 'self' concept would grow in richness over time through the inclusion of additional experience. However, by introducing a 'Self' concept explicitly, the learning of a self model can be made much more efficient.
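To make the mechanism concrete, here is a minimal sketch of mine (in Python, not actual OpenNARS code) of how repeated operation/effect pairings could accumulate evidence for a self-attributed predictive belief. The truth functions follow the NAL definitions (Wang, Non-Axiomatic Logic, 2013) with the usual evidential horizon k = 1; the statement encoding, the ^switch/light_on event names, and the 0.9 default event confidence are illustrative assumptions on my part.

# Illustrative sketch only: NAL-style temporal induction plus revision.
# The statement "<(*, SELF, ^switch) =/> light_on>" and the event values
# are made-up examples; the truth functions follow Wang (2013).

K = 1.0  # evidential horizon

def w2c(w):
    """Evidential weight to confidence: c = w / (w + K)."""
    return w / (w + K)

def induction(f1, c1, f2, c2):
    """NAL induction (also used for temporal induction): the conclusion
    inherits frequency from the first premise and gets weak confidence."""
    return f1, w2c(f2 * c1 * c2)

def revision(f1, c1, f2, c2):
    """NAL revision: pool the evidential weights of two beliefs about the
    same statement, so confidence grows with repeated experience."""
    w1, w2 = c1 / (1 - c1), c2 / (1 - c2)
    w = w1 + w2
    return (w1 * f1 + w2 * f2) / w, w2c(w)

# Each time the system executes ^switch and then observes light_on,
# temporal induction derives "<(*, SELF, ^switch) =/> light_on>" once
# more, and revision merges the new evidence into the existing belief.
belief = None
for trial in range(5):
    derived = induction(1.0, 0.9, 1.0, 0.9)  # one operation/effect pairing
    belief = derived if belief is None else revision(*belief, *derived)
    print(f"after trial {trial + 1}: f={belief[0]:.2f}, c={belief[1]:.2f}")

The point of the sketch is only that the self-attributed belief starts weak (confidence of roughly 0.45 after one pairing) and strengthens with every repetition, which is the growing-in-richness Patrick described; a built-in 'Self' concept just gives that process a head start.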
The second challenge is more difficult to deal with: Ben is clearly of the opinion that the NAL inference rules do not work. This is not a new objection, but I think it needs to be dealt with. I know Ben and Pei have discussed this many times!
I believe this is an issue for two reasons. First, Ben is very influential in the AGI community, and if his opinion is that NAL does not work, this will affect the adoption of the principles embodied in NARS, which would be a great tragedy. Second, and more importantly, we have to be sure that he is not right!
The basis of Ben's argument is that in 2005 (or thereabouts) he encoded the NAL inference rules and ran a set of tests on various datasets to evaluate NAL's capability compared to a probabilistic approach. His conclusion was that the 'heuristics' for induction and abduction were wrong, although deduction was broadly in line with his expectations.
I have challenged his conclusion with two arguments. First, the inference rules were redesigned in 2013 within a well-grounded formalism (Wang, P., Non-Axiomatic Logic, 2013), so the earlier analysis no longer applies. Second, the basis of his comparison is flawed: NAL is not designed to be a probabilistic inference logic, so comparing it to one does not make sense. I positioned NAL as a 'cognitively' reasonable logic, and NARS as a system designed to be a tool user just as humans are. I accepted that NARS cannot generate precise probabilistic results from large datasets unaided, but the same is true of the human mind, and this is by design, not a failure of the logic. If you want precise probabilistic results, you provide NARS with an appropriate set of tools.

I think the core of Ben's objection is that inductive and abductive inference will, over time, lead to nonsense results in the belief network. This is a valid objection and I think we need to investigate this possibility.
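For anyone who has not seen them, here is a sketch of the truth functions at issue, following the 2013 redesign (Wang, Non-Axiomatic Logic, 2013; the same forms appear in OpenNARS). The premise ordering follows the book, k = 1 is the usual default, and the sample numbers are mine. The relevant design point is that induction and abduction are deliberately 'weak' rules: a single premise pair can never push their conclusions above confidence 1/(1+k) = 0.5, so they yield revisable hypotheses, not probability estimates.

# Sketch of the NAL truth functions under discussion (after Wang,
# Non-Axiomatic Logic, 2013).  A truth value is a <frequency, confidence>
# pair; K is the evidential horizon (1 by default).

K = 1.0

def w2c(w):
    """Evidential weight to confidence: c = w / (w + K)."""
    return w / (w + K)

def deduction(f1, c1, f2, c2):
    """{M->P <f1,c1>, S->M <f2,c2>} |- S->P.  A 'strong' rule: its
    confidence is not bounded away from 1."""
    f = f1 * f2
    return f, f * c1 * c2

def induction(f1, c1, f2, c2):
    """{M->P <f1,c1>, M->S <f2,c2>} |- S->P.  A 'weak' rule: confidence
    stays below 1/(1+K) no matter how strong the premises are."""
    return f1, w2c(f2 * c1 * c2)

def abduction(f1, c1, f2, c2):
    """{P->M <f1,c1>, S->M <f2,c2>} |- S->P.  Weak, symmetric to induction."""
    return f2, w2c(f1 * c1 * c2)

print(deduction(1.0, 0.9, 1.0, 0.9))  # (1.0, 0.81)
print(induction(1.0, 0.9, 1.0, 0.9))  # (1.0, ~0.45)
print(abduction(1.0, 0.9, 1.0, 0.9))  # (1.0, ~0.45)

Whether the weak-rule cap plus revision is really enough to stop inductive and abductive conclusions from compounding into nonsense over long runs is exactly the empirical question I think we need to investigate.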
A personal highlight for me was "Modelling Motivation in MicroPsi 2" (Joscha Bach).
Have fun
Tony
FYI
From: 王培 [mailto:mail_p...@163.com]
Sent: 26 July 2015 13:07
To: tony_lo...@btinternet.com
Subject: Re:[open-nars] AGI-Conference 2015 follow up
Hi, Tony,
Currently I cannot directly post to the group, so please forward this message to the group for me.
Thanks for the nice and informative report. It's a pity that I was unable to attend AGI-15, but I'm glad to know that our project was well presented. I've also heard from other people about the good job you guys have done.
I didn't know about the "self model" workshop. I'm glad that Patrick could put something together without much time to prepare. Which other projects were presented?
As you have seen, the AERA team has been our closest collaborator in recent years, and this cooperation will continue. One focus will be on "education".
As for the debate between Ben and me, it has continued for at least 15 years, both in private and in public. In several cases it was heated, though we still consider each other friends, and cooperate on topics where we agree (such as fighting against mainstream AI). The most recent exchange on this matter happened in the AGI-13 workshop "Probability theory or not" (http://www.agi-conference.org/2013/workshops/). Unfortunately the papers and videos are no longer at the workshop website, but my paper can be found at http://www.cis.temple.edu/~pwang/Publication/probability.pdf, and the video of my talk may still be somewhere online (Youtube?). The talks of the others may be findable as well. After the workshop I suggested to Ben and Marcus that we do a special issue of JAGI to compare our opinions on this important topic in a more organized way, but both of them declined. Since Ben still holds his opinion (as you said, it is basically "NAL is wrong because it does not follow probability theory") and expresses it publicly, I'll find another opportunity to settle this with him. ;-)
Anyway, I'm glad that you have enjoyed AGI-15 -- even though the conference series has its problems, it is still more interesting than the other academic conferences I have attended. I don't know whether it was announced during AGI-15 or not, but according to the current plan, AGI-16 will be in New York City in early July 2016, jointly with a few other conferences (such as BICA), to celebrate the 60th anniversary of the Dartmouth Meeting, which was closer to AGI than the current AI meetings are. I hope more people from the team will be there, with or without papers. I'll make sure not to miss that one -- at least I won't need a visa to go to NYC. ;-)
Best regards,
Pei
To answer Pei's question regarding which other projects were presented, as far as I recall the list is as follows:
AERA
Sigma (SOAR derivative)
MicroPSI 2
OpenCog
plus Brain Simulator (GoodAI), though this is more a development environment than an actual AGI
If anyone recalls any other systems, please shout.
Regards