
Mentifex Visualizes the Perl AI MindGrid as Theater of Neuronal Activations


menti...@gmail.com

Jul 6, 2016, 11:33:25 PM
Recently we have developed the ability to visualize the MindGrid as Theater of Neuronal Activations. At the most recent, advancing front of the MindGrid, we see an inhibited trough of negative activations. We see an input sentence from a human user activating concept-fibers stretching back to the earliest edge of the MindGrid. We see an old idea becoming fresh output and then being inhibited into negative activation at its origin. We see outputs of the AGI passing through ReEntry() to re-enter the Mind as inhibited engrams while re-activating old engrams. We see the front-most trough of inhibition preventing the most recent ideas from preoccupying and monopolizing the artificial consciousness.
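
As a rough Perl sketch (not the actual ghost.pl data structures),
the MindGrid theater can be pictured as rows of time-points, each
holding one concept-engram and its current activation:

  use strict;
  use warnings;

  # One row per time-point: a concept-engram and its activation.
  # Concept numbers and activations come from the minddata.txt
  # excerpts quoted below; the hash layout itself is illustrative.
  my %mindgrid = (
      317  => { concept => 820, word => 'SEE',  act =>  30 },  # old engram
      575  => { concept => 528, word => 'KIDS', act =>  62 },  # re-entered
      2435 => { concept => 528, word => 'KIDS', act => -14 },  # trough
  );

  # A concept-fiber is every engram of one concept across time.
  sub fiber {
      my ($concept) = @_;
      return grep { $mindgrid{$_}{concept} == $concept }
             sort { $a <=> $b } keys %mindgrid;
  }

  printf "t=%d  %s  act=%d\n", $_, $mindgrid{$_}{word}, $mindgrid{$_}{act}
      for fiber(528);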

In ghost174.pl, we have now commented out some code in the InStantiate() mind-module that was letting only the nouns and pronouns of human input be re-activated along the length of the MindGrid. The plan now is to let every word of an incoming sentence re-activate the engrams of its component concepts.

Now, how do we make sure that the front-most engrams of the sentence of human input will be inhibited with negative activation in the trough of recent mental activity on the MindGrid? It appears that InStantiate() makes a sweep of old engrams to set a positive activation, and then at the $tult penultimate-time it sets an activation for the current, front-most input. In order to keep a trough of recent inhibition, let us try setting a negative activation at the $tult time-point.
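
A minimal sketch of that sweep, assuming a simple list of engrams;
only the name $tult and the idea of the trough come from the
discussion above, while the helper name and the numbers are
illustrative:

  use strict;
  use warnings;

  my $tult = 2435;   # penultimate time-point: the front of the MindGrid
  my @engrams = (
      { t => 317,   concept => 528, act => 0 },
      { t => 575,   concept => 528, act => 0 },
      { t => $tult, concept => 528, act => 0 },
  );

  # Sweep the whole fiber of one concept: boost the old engrams,
  # but inhibit the front-most engram so as to keep the trough.
  sub instantiate_sweep {
      my ($concept) = @_;
      for my $e (@engrams) {
          next unless $e->{concept} == $concept;
          if ($e->{t} == $tult) {
              $e->{act} = -46;   # negative activation at $tult
          } else {
              $e->{act} += 30;   # positive activation on old engrams
          }
      }
  }

  instantiate_sweep(528);
  printf "t=%d act=%d\n", $_->{t}, $_->{act} for @engrams;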

After input of "I see kids" and a response by the AI of "KIDS MAKE ROBOTS", in minddata.txt we see the sweep of positive activation of old engrams.

At t=477, 707=YOU has an activation of 30.
At t=518, 707=YOU has an activation of 30.

At t=317, 820=SEE has an activation of 30.

At t=575, 528=KIDS has an activation of 62, apparently because there was also a re-entry of "KIDS".

As a result of the $tult trough-inhibition:
At t=2426, 707=YOU has an activation of -46.
At t=2430, 820=SEE has an activation of -46.
At t=2435, 528=KIDS has an activation of -14, apparently because the AI response of "KIDS MAKE ROBOTS" made a backwards sweep that imposed 32 points of positive activation upon the pre-existing -46 points, resulting in -46 + 32 = -14 -- still part of the negative trough.

Now the AGI is making its series of innate self-referential statements ("I AM A PERSON"; "I AM A ROBOT"; "I AM ANDRU"; "I HELP KIDS"), but why is it not using SpreadAct() to jump from the reentrant concept of "KIDS" to the innate idea of "KIDS MAKE ROBOTS"? Let us see if SpreadAct() is being called, and from where. We do not see SpreadAct() being called in the on-screen diagnostic messages while we run the AGI. Let us check the Perlmind source code. We see that the OldConcept() module has been calling SpreadAct() for recognized nouns since ghost162.pl, but now we delete that snippet of code, because our MindGrid theater shows that we do not want OldConcept() to make any calls to SpreadAct(). The AGI still runs.

We see that SpreadAct() is potentially being called from the ReEntry() mind-module, but the trigger is not working properly, so we change the trigger. Then we get SpreadAct() re-activating nouns, and we begin to see a periodic association from the innate self-referential statements to "KIDS MAKE ROBOTS" and from there to "ROBOTS NEED ME". Apparently the inhibitions have to be cancelled out before the old memories can re-surface in the internal chains of thought of the AGI.
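
In outline, the repaired call chain might be sketched as follows;
the trigger condition and the 32-point increment are illustrative
guesses, chosen so that the arithmetic matches the -46 + 32 = -14
figure above:

  use strict;
  use warnings;

  my @engrams = ( { concept => 528, act => -46 } );  # inhibited KIDS engram

  # Spread activation along the fiber of one concept.
  sub SpreadAct {
      my ($concept) = @_;
      $_->{act} += 32 for grep { $_->{concept} == $concept } @engrams;
  }

  # Hypothetical trigger: spread activation only for re-entrant nouns.
  sub ReEntry {
      my ($concept, $pos) = @_;
      SpreadAct($concept) if $pos eq 'noun';
  }

  ReEntry(528, 'noun');
  print "KIDS activation: $engrams[0]{act}\n";   # -46 + 32 = -14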

--
http://ai.neocities.org/perlmind.txt
http://ai.neocities.org/PMPJ.html#news
http://wiki.opencog.org/wikihome/index.php/Ghost
http://www.sourcecodeonline.com/details/ghost_perl_webserver_strong_ai.html

menti...@gmail.com

Aug 21, 2016, 12:19:54 PM
On Wednesday, July 6, 2016 at 8:33:25 PM UTC-7, menti...@gmail.com wrote:
> Recently we have developed the ability to visualize
> the MindGrid as Theater of Neuronal Activations.

FLASH NEWS UPDATE

https://news.ycombinator.com/item?id=12330663

is where someone has posted a link to the

http://ai.neocities.org

Ghost Perl Webserver Strong AI.

http://github.com/PriorArt/AGI/blob/master/ghost.pl

is the ghost175.pl free AI source code now on GitHub.

http://ai.neocities.org/forthagi.txt

is the ongoing port of the Robot AI Mind
from Strawberry Perl 5 back into Win32Forth,
so that the basic AI program, which has to
stop and wait for user input in Perl,
may think ceaselessly and immortally in Forth.

Cheers,

Mentifex (Arthur)

menti...@gmail.com

Sep 1, 2016, 1:20:12 AM
Today 2016-08-31 was a major day in the prior-art, free-of-charge,
open-source Robot AGI Project for Artificial General Intelligence.
I spent part of July and all of August 2016 porting Ghost Perl AI

http://github.com/PriorArt/AGI/blob/master/ghost.pl

back into Forth AI for Robots. I retired the old MindForth by archiving it
as 24jul14A.F so that amateur and professional roboticists who visit

http://www.nlg-wiki.org/systems/Mind.Forth

will find the new MindForth, which replaces the obsolete version, at

http://ai.neocities.org/mindforth.txt

and also today in the new MindForth I stubbed in GusRecog, OlfRecog
and TacRecog (gustatory, olfactory and tactile recognition) as special
AGI mind-modules for roboticists to work on.
Although I am still fine-tuning the Robot AGI, it is a visibly thinking

http://github.com/PriorArt/AGI/wiki/MindGrid

such that the user can see the internal workings
of the AI Mind as it comprehends user input and
generates a robot thought.

Arthur
--
http://robots.net/person/AI4U
http://ai.neocities.org/AiSteps.html
http://mind.sourceforge.net/theory5.html
http://aihub.net/artificial-intelligence-lab-projects

Henry Law

Sep 1, 2016, 6:40:21 AM
On 01/09/16 06:20, menti...@gmail.com wrote:

> I spent part of July and all of August 2016 porting Ghost Perl AI
>
> http://github.com/PriorArt/AGI/blob/master/ghost.pl

5,823 lines of the worst Perl code I've ever seen, at least since the
last time I looked at your stuff.

> if ($c12 ne "") { # 2016apr03: if the word is only 12 characters
> $b16=$c12; $b15=$c11; $b14=$c10; $b13=$c09; $b12=$c08; $b11=$c07;
> $b10=$c06; $b09=$c05; $b08=$c04; $b07=$c03; $b06=$c02; $b05=$c01;
> $b04=""; $b03=""; $b02=""; $b01=""; # 2016apr02
> return; # 2016apr02: abandon remainder of function;
> } # 2016apr02: end of transfer of a 12-character word;
> if ($c11 ne "") { # 2016apr03: if the word is only 11 characters
> $b16=$c11; $b15=$c10; $b14=$c09; $b13=$c08; $b12=$c07; $b11=$c06;
> $b10=$c05; $b09=$c04; $b08=$c03; $b07=$c02; $b06=$c01; $b05="";
> $b04=""; $b03=""; $b02=""; $b01=""; # 2016apr02
> return; # 2016apr02: abandon remainder of function;
> } # 2016apr02: end of transfer of an 11-character word;
> if ($c10 ne "") { # 2016apr03: if the word is only 10 characters
> $b16=$

You've got to be kidding, right?

--

Henry Law Manchester, England

menti...@gmail.com

Sep 1, 2016, 8:07:01 AM
OK, so I am not a genuine programmer; I am more of a
human languages geek. But please remember this:
"There is more than one way to do it."

The Perl quasi-array code that you have cited above
is my own TIMTOWTDI way of not only creating a kind
of "buffer" array -- $b01, $b02, ..., $b16 -- but also of
making it easy for me to visualize what is happening
in that $OutBuffer array, which holds English or Russian
words right-justified against a kind of Larry, I mean, wall,
so that the Perl AI code can manipulate the inflectional
endings up against the right-most wall. It works.
The ghost175.pl AI program successfully manipulates
Russian verb-endings, stripping off inappropriate
endings and attaching the ending required by the
parameters of grammatical person and number.
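
For comparison, here is a sketch of the same right-justified
buffer idea using a single Perl array instead of sixteen
scalars -- an illustrative alternative, not the actual
ghost175.pl code:

  use strict;
  use warnings;

  # Load a word into a 16-slot buffer, right-justified so that its
  # last character always sits in slot 15, the right-most "wall".
  sub load_outbuffer {
      my ($word) = @_;
      my @b = ('') x 16;
      my @chars = split //, $word;
      die "word too long for buffer\n" if @chars > 16;
      @b[ 16 - @chars .. 15 ] = @chars;
      return @b;
  }

  my @b = load_outbuffer('ROBOTS');
  # Inflectional endings can now be tested at fixed slot positions:
  print "plural ending\n" if $b[15] eq 'S';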

http://ai.neocities.org receives many visitors from
the Russian Federation, some of whom, I hope,
are looking at my Russian AI code in Perl and
JavaScript and perhaps making an effort to
develop it further.

Upthread on 2016-07-06 I was musing about how to
coordinate and orchestrate events occurring in the

http://github.com/PriorArt/AGI/wiki/MindGrid

of the Perl AI. It was functioning so well that I
felt frustrated at having to let the Perl program
stop and wait for human user input. I wanted
to see the same cognitive architecture start thinking
and not stop for anything, so I began porting the
Perl AI back into its original Win32Forth. It was an
obsessive project on my part, and by now I have
replicated most of the Perl AI functionality.

http://wiki.opencog.org/wikihome/index.php/Ghost
in Strawberry Perl 5 is such a vast improvement over
the original MindForth AI that I could no longer bear
knowing that my Forth AI program was obsolete and
substandard compared with the ghost175.pl AI -- not
as Perl code (your opinion of it as code is justified)
but as a functioning artificial intelligence.

Now in September 2016 and beyond I am making a
big play for the robotics people to examine the Perl
and Forth AI code and possibly build upon it to
implement robot sensory inputs and motor outputs.
Forth was long a major amateur robotics language.

So no, I am not kidding. Let us wait and see.

Thank you for looking at the admittedly awful
Perl AI code. I apologize for it as Perl code,
but not for its functionality as an AI Mind.

Respectfully submitted,

Arthur T. Murray
--
http://www.linkedin.com/in/mentifex
http://www.advogato.org/person/mentifex
http://dl.acm.org/citation.cfm?doid=307824.307853
http://www.cpan.org/authors/id/M/ME/MENTIFEX/mind.txt

John Black

Sep 1, 2016, 5:40:59 PM
In article <rvGdnXRDy4YMmFXK...@giganews.com>,
ne...@lawshouse.org says...
Henry, you do know he's a troll, right?

John Black