
MindForth Programming Journal 2019-10-05


menti...@gmail.com

Oct 5, 2019, 7:31:05 PM

MindForth resets associative tags before each operation of the Indicative module.

In the MindForth artificial intelligence (AI) for robots, we will now start displaying diagnostic messages at the start of the Indicative module, to tell us the values held in the variables that serve as associative tags interconnecting the concepts expressed as English words during the operation of the Indicative mind-module. Since the ConJoin module will often insert a conjunction between two thoughts being generated, the AI Mind Maintainer needs assurance that variable values operative during one thought do not erroneously get carried over past a conjunction into the separate process of generating another thought.

Just by resetting the tsj time-of-subject variable to zero at the start of the Indicative module, we have made the Forthmind able to trigger repeated instances of logical InFerence. Each run of the Indicative module amounts to a fresh declaration of the associative tags in the conceptual flag-panel that supports the generation of ideas in the MindGrid. The following dialog still has a few problems, but it shows the repeated triggering of an inference.
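
As a minimal sketch of the idea in Forth (hypothetical code, not the released MindForth source, but using the journal's own variable names tsj, tvb and tdo):

  \ Sketch only: reset the associative-tag variables at the start
  \ of an Indicative-style module, then display them as a diagnostic.
  VARIABLE tsj   \ time-of-subject
  VARIABLE tvb   \ time-of-verb
  VARIABLE tdo   \ time-of-direct-object

  : Indicative ( -- )
    0 tsj !  0 tvb !  0 tdo !        \ no carry-over past a conjunction
    CR ." Indicative: tsj= " tsj @ .
    ." tvb= " tvb @ .  ." tdo= " tdo @ .
    \ ... generation of subject, verb and object would follow here ...
  ;

Each call to Indicative then prints the "Indicative: tsj= 0 tvb= 0 tdo= 0" line seen in the dialog below.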

john is a student

DOES JOHN READ BOOKS
no


Indicative: tsj= 0 tvb= 0 tdo= 0
A JOHN DOES NOT READ BOOKS

Indicative: tsj= 0 tvb= 0 tdo= 0
PROFESSORS TEACH THE STUDENTS AND
Indicative: tsj= 0 tvb= 0 tdo= 0
STUDENTS READ THE BOOKS

Indicative: tsj= 0 tvb= 0 tdo= 0
I UNDERSTAND YOU AND
Indicative: tsj= 0 tvb= 0 tdo= 0
YOU ARE A MAGIC
andru is student

DOES ANDRU READ THE BOOKS
no

Indicative: tsj= 0 tvb= 0 tdo= 0
AN ANDRU DOES NOT READ THE BOOKS AND
Indicative: tsj= 0 tvb= 0 tdo= 0
YOU READ THE BOOKS

Indicative: tsj= 0 tvb= 0 tdo= 0
PROFESSORS TEACH THE STUDENTS AND
Indicative: tsj= 0 tvb= 0 tdo= 0
STUDENTS READ THE BOOKS

Indicative: tsj= 0 tvb= 0 tdo= 0
STUDENTS READ THE BOOKS AND
Indicative: tsj= 0 tvb= 0 tdo= 0
I THINK

http://ai.neocities.org/mindforth.txt -- free AI source code.

Manuel Rodriguez

Oct 6, 2019, 5:14:47 AM

On Sunday, October 6, 2019 at 01:31:05 UTC+2, menti...@gmail.com wrote:
> http://ai.neocities.org/mindforth.txt -- free AI source code.


The problem with the MindForth project is that no grounding is
available. Language grounding means connecting a language environment
to a real-world scenario. In the case of MindForth, the language model
is given in the source code by the MindBoot() subfunction. The
preprogrammed dataset isn't compatible with any external domain, for
example an existing role-playing game. This makes it impossible to use
MindForth for creating a chatbot.

I'd like to give an example of why language grounding is important
for an artificial intelligence. Suppose no language grounding were
implemented and the source code were instead programmed from scratch:
the program asks for a user input, does something with the entered
string, and writes the output back to the command line. From the
program's perspective it's working great, but for a user the software
doesn't make much sense, because the user doesn't want to learn what
the software is doing; he wants to utilize the program for solving his
own task. That means the user defines the requirements first, and then
the program has to fulfill those needs.

A missing grounding capability means that the language model isn't
working within the requirements of the user, but works autonomously.
That means that what the software has to offer isn't compatible with
the user's requirements.

MindForth and many other early projects in natural language processing
fall short of the requirement of becoming a narrow AI. A narrow AI is
software which fulfills external requirements. The self-description as
a Strong AI is accurate, and it makes clear that language grounding
hasn't been implemented yet.

A. K.

Oct 6, 2019, 5:28:01 AM

If MindForth for starters could solve a classic blocks world AI problem...

Manuel Rodriguez

Oct 6, 2019, 7:33:10 AM

On Sunday, October 6, 2019 at 11:28:01 UTC+2, A. K. wrote:
> If MindForth for starters could solve a classic blocks world AI problem...


What makes the blocks-world problem easy to understand is that its
self-definition is transparent. Blocks world defines itself as an
environment: it formulates a spatial problem. The blocks-world engine
has some rules, for example which blocks can be stacked on each other,
and the user has the task of solving this game. That means the
blocks-world domain is the problem, and the user has to provide the
intelligence to solve it. Basically speaking, blocks world is not an
artificial intelligence; rather, it asks for artificial intelligence.
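
For concreteness, here is a toy sketch in Forth of what such an
environment amounts to (illustrative only, unrelated to any MindForth
release): the program holds the state and enforces a stacking rule,
while the intelligence to reach a goal state has to come from outside.

  \ Toy blocks world: blocks 1..3; under[b] = what b rests on, 0 = table.
  CREATE under  4 CELLS ALLOT
  under 4 CELLS ERASE              \ all blocks start on the table

  : clear? ( b -- flag )           \ true if nothing rests on block b
    TRUE SWAP
    4 1 DO
      DUP I CELLS under + @ =
      IF SWAP DROP FALSE SWAP THEN
    LOOP DROP ;

  : move ( b dst -- )              \ the rule: only a clear block may move
    OVER clear? IF                 \ (a full engine would also check dst)
      SWAP CELLS under + !
    ELSE 2DROP ." blocked " THEN ;

After 1 2 move (putting block 1 onto block 2), 2 clear? leaves FALSE
on the stack: the engine enforces the rules, but choosing which moves
reach a goal is left entirely to the player.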

What would happen if somebody tried to redefine the blocks-world
problem? One option would be to explain that blocks world is a general
artificial intelligence which is highly complex and can be utilized
to solve existing games like chess. If the explanation uses a certain
academic language, an untrained audience will believe that blocks
world can do lots of things. The user will start the chess engine,
then activate the blocks-world AI, and be convinced that this is a
breakthrough in artificial intelligence research ...

menti...@gmail.com

Oct 6, 2019, 9:14:39 AM

On Sunday, October 6, 2019 at 2:14:47 AM UTC-7, Manuel Rodriguez wrote:
> Am Sonntag, 6. Oktober 2019 01:31:05 UTC+2 schrieb menti...@gmail.com:
> > http://ai.neocities.org/mindforth.txt -- free AI source code.
>
>
> The problem with the MindForth project is
> that no grounding is available.

http://en.wikipedia.org/wiki/Grounding_problem -- is indeed very important.

> Language grounding means connecting a language environment
> to a real-world scenario.

For a real world scenario, MindForth needs embodiment in a robot. Forthers?

> In the case of MindForth, the language model is
> given in the source code by the MindBoot() subfunction.

http://ai.neocities.org/MindBoot.html -- is just preliminary vocabulary.

> The preprogrammed dataset isn't compatible with any external
> domain, for example an existing role-playing game.

http://ai.neocities.org/NewConcept.html -- lets MindForth learn new words.

> This makes it impossible to use MindForth for
> creating a chatbot. [...]

Because MindForth learns new words, it is an excellent chatbot.

http://cyborg.blogspot.com/2019/10/mfpj1005.html -- provides embedded links.

http://www.mail-archive.com/a...@agi.topicbox.com/msg02544.html -- AGI list.

http://dl.acm.org/citation.cfm?doid=307824.307853 -- Association for Computing Machinery

Manuel Rodriguez

Oct 6, 2019, 6:33:33 PM

On Sunday, October 6, 2019 at 15:14:39 UTC+2, menti...@gmail.com wrote:
> > http://ai.neocities.org/mindforth.txt -- free AI source code.
> For a real world scenario, MindForth needs embodiment in a robot. Forthers?
> Because MindForth learns new words, it is an excellent chatbot.


Perhaps we have to define what a chatbot is, and what “learning of new
words” means. A chatbot is a computer program which fulfills external
requirements. An example requirement is: talk about the weather. Is
MindForth able to make small talk about the weather? Not yet. Instead,
MindForth asks the Forth programmer to develop a standalone program,
or a subroutine for MindForth, which implements a chatbot.

The same is true for another feature: learning of new words. Learning
means that the software improves itself by storing new vocabulary into
main memory. With this definition in mind, it's possible to judge
whether a computer program can do such things. In the case of
MindForth no such feature is available, and it was never designed for
this purpose. Instead, MindForth asks the user to extend the routines
... and so on.
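
By that definition the mechanism itself is small. A minimal Forth
sketch of "storing new vocabulary into main memory" (hypothetical
names, not MindForth's actual NewConcept code) might look like this:

  \ Sketch: append a word to an in-memory lexicon as a counted
  \ string (assumes the word is shorter than 32 characters).
  CREATE lexicon  64 32 * CHARS ALLOT   \ 64 entries of 32 chars
  VARIABLE #words  0 #words !

  : learn ( c-addr u -- )
    #words @ 32 * CHARS lexicon +       \ address of the next free slot
    2DUP C!                             \ store the length byte
    1+ SWAP CMOVE                       \ copy the characters
    1 #words +! ;

  S" weather" learn                     \ the lexicon now holds one word

Whether such storage alone counts as the kind of learning discussed
here is exactly the point under debate.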

Perhaps it makes sense to generalize the problem a bit. A narrow AI is
a computer program which fulfills a task defined in advance. For
example, the challenge is to translate a sentence from Russian into
English, and the narrow AI has to fulfill that task. MindForth is not
a narrow AI; rather, it coincides with its own domain definition.

It's important to explain that it's not very complicated to program
software which creates an action space. All the programmer has to do
is imagine a game and formulate it as a computer program. All games
ask the user to perform certain actions, and the game provides the
rules which force the user to act inside the game. The problem is that
artificial intelligence was invented with the aim of solving a game,
because this is what humans are doing: they are providing the answer
to a problem. To judge whether a certain type of software is capable
of doing so, we have to investigate whether the software is able to
play not its own game, but external games.

none albert

Oct 7, 2019, 4:12:06 AM

In article <356e48c5-f846-4da3...@googlegroups.com>,
You really explain this very well. It means that Mindforth is not
going anywhere, if that wasn't yet clear after 20 years.

Greetings, Albert
--
This is the first day of the end of your life.
It may not kill you, but it does make you weaker.
If you can't beat them, too bad.
albert@spe&ar&c.xs4all.nl &=n http://home.hccnet.nl/a.w.m.van.der.horst

Manuel Rodriguez

Oct 7, 2019, 5:28:02 AM

On Monday, October 7, 2019 at 10:12:06 UTC+2, none albert wrote:
> It means that Mindforth is not
> going anywhere, if that wasn't yet clear after 20 years.


I'm not sure whether it's enough to criticize only the MindForth
project, rather than Strong AI in general. Apart from MindForth, many
cognitive simulators mentioned in the literature are not grounded in
reality but formulate problems on their own. The general idea is to
blur the line between an environment and an agent which acts inside
the simulation.

Or let me ask it differently: what is the intelligence quotient of a
Tetris simulator? This kind of question sounds a bit unusual, because
normally the question is how well a user plays the Tetris game. Asking
for the IQ score of a simulator sounds like a Zen kōan, doesn't it?

none albert

Oct 7, 2019, 5:57:54 AM

In article <96b1f3b2-e78d-44dc...@googlegroups.com>,
I'm studying Stanley Greenspan's
"The Growth of the Mind and the Endangered Origins of Intelligence".
It explains that intelligence develops from a very primitive core
and that motivation (hunger, craving affection) plays an important
role there. Failing that leads to autism.
A Tetris game is severely autistic.
I'm convinced that an intelligence has to develop, not be programmed.

Piaget also has much to say about the subject.

You are right about playing games. A good Turing test is:
"Let's play a game."
"It is called chess."
"Let me explain the rules."
If, after explaining the rules, it plays a half-decent game of chess,
or even just plays by the rules, we are getting somewhere.

In 2000 I started (inspired to some extent by Mentifex) my computer
intelligence Forth to explore those ideas. I have been working on
tools since, and did not push the subject further, partly because of
a lack of clear ideas.