The central executive

John F Sowa

Apr 10, 2024, 2:07:49 PM
to ontolo...@googlegroups.com, Peirce List, CG
In today's ZOOM meeting, I objected to the term 'neuro-symbolic hybrid' of artificial neural networks (ANNs) with symbols.  Hybrids imply two (sometimes more) distinctly different things.  But all the processes in the mind and brain are integrated, and they all operate continuously in different parts of the brain, which are all monitored and controlled by a central executive.  For AI, integration is the goal, and a hybrid stage is something that needs to be replaced with a tighter integration.  I believe that the final document should emphasize the dangers that Gary Marcus and I discussed in March.

And for that matter, artificial neural networks are not new.  William James suggested the telephone network as a model, and more detailed mathematical models were developed in the 1940s.  In fact, Marvin Minsky, one of the founders of AI, wrote his PhD thesis at Princeton on a mathematical model of neural networks in the early 1950s. 

Research in the cognitive sciences involves a collaboration of all the sciences that study any and every aspect of cognition:  philosophy, psychology, logic, artificial intelligence, neuroscience, and anthropology.  As an overview of the methods of integration, I attach a copy (Section 7) of an article that is in press:  Phaneroscopy:  The Science of Diagrams.

In that section, I show how the theories of C. S. Peirce and recent developments in the cognitive sciences support, illustrate, and explain the issues.  In particular, they go far beyond just a hybrid of two approaches.  They treat the brain and the mind it supports as an integrated system.  The key to integration is a central executive, located in the frontal lobes, which relates and controls every component in the cerebral cortex, the cerebellum, and the brain stem.

Note the loop with a photo of Peirce standing in the center.  It shows how the four steps of abduction, deduction, observation, and induction work together.    Every iteration -- from milliseconds to hours to days -- involves guessing, reasoning, observing, and learning.   These processes are not separated.   They operate continuously.  
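As a toy illustration (my sketch in Python, not code from the article; the functions and the bean-example setup are illustrative stand-ins), the four steps can be run as one continuous loop in which each pass updates the knowledge used by the next:

import random

def abduce(anomaly, knowledge):
    # Guessing: propose a plausible cause for the surprising fact.
    return random.choice(knowledge.get(anomaly, ["unknown cause"]))

def deduce(hypothesis):
    # Reasoning: derive a testable prediction from the guess.
    return f"if '{hypothesis}' holds, sampling should confirm it"

def observe(prediction):
    # Observing: stand-in for a real test; here a coin flip.
    return random.random() > 0.5

def induce(knowledge, anomaly, hypothesis, confirmed):
    # Learning: promote confirmed hypotheses for later iterations.
    causes = knowledge.setdefault(anomaly, [hypothesis])
    if confirmed and hypothesis in causes:
        causes.remove(hypothesis)
        causes.insert(0, hypothesis)
    return knowledge

# Peirce's classic surprising fact: these beans are white.
knowledge = {"beans are white": ["beans are from this bag", "coincidence"]}
for _ in range(3):  # every iteration guesses, reasons, observes, learns
    h = abduce("beans are white", knowledge)
    confirmed = observe(deduce(h))
    knowledge = induce(knowledge, "beans are white", h, confirmed)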

For details, see Section 7, which also contains several references to articles and slides with more detail.  I also recommend The Central Executive Network (CEN):  https://www.o8t.com/blog/central-executive-network#:~:text=Since%20its%20initial%20discovery%20in,middle%20and%20inferior%20temporal%20gyri 

John
 
Section7.pdf

doug foxvog

Apr 10, 2024, 4:04:37 PM
to ontolo...@googlegroups.com
On Wed, April 10, 2024 14:07, John F Sowa wrote:
> In today's ZOOM meeting, I objected to the term 'neuro-symbolic hybrid' of
> artificial neural networks (ANNs) with symbols. Hybrid's (sic) imply two
> (sometimes more) distinctly different things. But all the processes in
> the mind and brain are integrated, and they all operate continuously in
> different parts of the brain, which are all monitored and controlled by a
> central executive. ...

This seems to me to model the body as a machine, and it is not an accurate description.

There are a wide variety of processes in the mind and brain -- many
processes in the brain occur independently without being integrated either
with each other or with the mind. I am excluding standard cellular level
processes that go on in every cell and the processes of the circulatory
system in the brain. Every neuron regularly chemically interacts with
adjacent neurons & passes electrical signals along its surface.

As far as I understand, we are unaware of much that goes on in the brain -- neurohormone production, for example.  Sensory input processing does not seem to be integrated with a number of other processes.  I have seen no evidence of a central executive in the brain that monitors and controls all the other processes.  I'm not sure how such a central executive could have evolved.

> John


John F Sowa

Apr 10, 2024, 6:39:12 PM
to ontolo...@googlegroups.com, Peirce List, CG
Doug,

The central executive was proposed by the neuroscientists Baddeley & Hitch, not by AI researchers.  There is nothing "machine-like" in the idea, by itself.  Without something like it, there is no way to explain how a huge tangle of neurons could act together and coordinate their efforts toward a common goal.

It reminds me of a neighboring town (to my residence in Croton on Hudson, NY), which was doing some major developments without hiring a general contractor.  They thought that their local town employees could schedule all the processes.  It turned out to be a total disaster.  All the subcontractors did their tasks in a random order, each one interfering with some of the others and causing a major mess.  There were lawsuits back and forth, and the town management was found guilty and had losses that were many times greater than the cost of hiring a general contractor.

It is certainly true that there is a huge amount of computation going on in the brain that is below conscious awareness.  Most of that is done by the cerebellum (little brain), which is physically much smaller than the cerebral cortex.  But it contains over four times as many neurons.  In effect, the cerebellum behaves like a GPU (Graphics Processing Unit), a superfast, highly specialized processor for all the perception and action that takes place without conscious awareness.

For example, when you're walking down the street talking on your cell phone, the cerebellum is monitoring your vision, muscles, and strides -- until you step off the curb and get run over by a bus. That's why you need a central controller to monitor and coordinate all the processes.

Sharks and dolphins are about the same size, and they eat the same kind of prey.  Sharks have a huge cerebellum and a small lump for a cerebral cortex.  Dolphins have a huge cerebral cortex and a huge cerebellum.  They are as agile as sharks, but they can plan, communicate, and coordinate their activities.  When food is plentiful, both eat their fill.  But when it's scarce, the dolphins are much more successful.

Please look at the citations in my previous note and the attached Section7.pdf.  The cycle of abduction, deduction, testing, and induction depends on a central executive that is responsible for planning, coordinating, and integrating those steps of conscious feeling, thinking, reasoning, and acting.  With a central executive, an AI system would be more intelligent.  But much, much more R & D would be required before anything could be called "Artificial General Intelligence" (AGI).  That's why I have very little faith in anything called AGI.

John
 


From: "doug foxvog" <do...@foxvog.org>
Subject: Re: [ontolog-forum] The central executive

On Wed, April 10, 2024 14:07, John F Sowa wrote:
> In today's ZOOM meeting, I objected to the term 'neuro-symbolic hybrid' of
> artificial neural networks (ANNs) with symbols. Hybrids simply relate two

Dima, Alden A. (Fed)

Apr 10, 2024, 6:51:56 PM
to ontolo...@googlegroups.com, Peirce List, CG

Hi John,

A certain large language model tells me that Alan Baddeley and Graham Hitch were psychologists and not neuroscientists.

Alden

P. S. Wikipedia says the same thing, so it must be right…


doug foxvog

Apr 10, 2024, 8:02:12 PM
to ontolo...@googlegroups.com
John,

Baddeley & Hitch's "central executive" (CE) is described as an attentional
controlling system. I have just briefly glanced at it, but it seems that
the point is coordinating and accessing memory through an episodic buffer,
phonological loop, and visio-spatial "sketchpad". The hypothesized CE
deals with information, language, memory, imagery, & spatial awareness.
That covers a lot, and i assume it would also cover conscious actions and
processes.

But I don't see it covering neurohormone production or things like heart rate.  Lower-level processes like basal signaling between neurons would have no need of a central executive, as they are just basal processes.

It's the word "all" in "all processes" that indicates to me that the claim
is excessive.

FWIW, I note that sharks also have brains -- as do "higher" orders of invertebrates.

-- doug f

John F Sowa

Apr 10, 2024, 8:44:17 PM
to ontolo...@googlegroups.com, Peirce List, CG
Doug,

The central executive controls all the processes that are controllable by the human ego.  But the term 'executive' should be considered the equivalent of what the chief executive officer (CEO) of a business does in managing a corporation.  There are intermediaries at various points.

Baddeley & Hitch wrote their initial article in 1974.  They wrote that in response to George Miller's "Magic Number 7, plus or minus 2."  They realized that there was much more to short-term memory than just words and phonemes.  They called Miller's storage "the phonological loop" and they added a visuo-spatial scratchpad for short-term imagery and feelings.  And they continued to revise and extend their hypotheses for another 20 or 30 years.   Other neuroscientists, who are specialists in different aspects, have been working on related issues.

The idea is an important one that the Generative AI gang has not yet latched onto.  But some AI people are starting to take notice, and I believe that they are on the right track.  In summary, there is more to come.  See the references I cited, and do whatever googling and searching you like.

John
 


From: "doug foxvog" <do...@foxvog.org>

John F Sowa

Apr 10, 2024, 8:56:45 PM
to ontolo...@googlegroups.com, Peirce List, CG
Dima,

Yes, they were in the same field as George Miller (psychology).  But they also hung out with enough neuroscientists that some of the blood and guts rubbed off on them.   Right now, the major research on the topic depends on neuroscience.

That is one among many reasons why I prefer to use the term 'Cognitive Science'.  The subject is so complex that collaboration among the different fields is essential.

John
 


From: "Dima, Alden A. (Fed)' via ontolog-forum" <ontolo...@googlegroups.com>

Dr. Lars Ludwig

May 5, 2024, 9:52:40 AM
to ontolo...@googlegroups.com, John F Sowa, Peirce List, CG
Doug, John, 
 
I am just reading this while catching up.  I think it is noteworthy that in modern (autopoietic) systems theory (Humberto Maturana, esp. Niklas Luhmann), any systems (not only societal ones) basically operate and evolve without a central executive.  Systemic intelligence is thus independent of any central control instance, which is sometimes understood as a weakness of modern societies.  The memory system, as the central conscious reproductive (intelligence) system of humans, is also not centrally controlled in any meaningful way I could think of.  (I have written about and explained the functioning of the memory system and its central importance for any technology in my thesis on "extended artificial memory", which is basically a general autopoietic theory of all memory sub-systems.)  Thus, theoretically, I don't yet get John's point.  I guess these are relics of pre-systemic sequential/hierarchical operational thinking (that is, classic information science) not yet touched by the paradoxical problem of closed cycles of control/system operations.
 
Lars   

John F Sowa

May 5, 2024, 3:23:59 PM
to Dr. Lars Ludwig, ontolo...@googlegroups.com, Peirce List, CG
Lars, Doug, List,

There is a huge difference between a reasoning system and a decision system.  Given a set of axioms and raw data, a reasoning system derives conclusions.  It does not make any value judgments about any of them, and it does not take any actions based on any conclusions.

But every living system, from bacteria on up, must make decisions about which of many sources of information must be considered in taking action.  I agreed with Mihai Nadin that the sources of knowledge are distributed among all components of the brain, but I should have added "brain and body".  Every part of the body generates signals of pain and pleasure of varying strength.  And the most brilliant or pleasurable thoughts must be deferred when a pain signal arrives from a finger touching a hot stove.

In any animal, there are an immense number of signals coming from every part of the brain and body.   There must be something that decides which one(s) to consider immediately and which ones may be deferred. 

The central executive is not my idea.  But I have done a fair amount of studying of all the branches of the cognitive sciences, and I have learned important ideas from comparing different ways they deal with common problems.

I'm not asking anybody to believe me.  But I am asking everybody to consider the wide range of insights that come from the different branches of all six:  philosophy, psychology, linguistics, artificial intelligence, neuroscience, and anthropology.  Please look at the references.  And if you don't like the references I cited, look for more.

As for the central executive, please let me know of any other mechanism that can decide whether it's better to (a) read a book, (b) take a nap, (c) eat lunch, or (d) duck and cover.
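For what it's worth, here is a minimal arbitration sketch of that decision in Python (the options come from the paragraph above; the urgency weights are invented for illustration):

def arbitrate(signals):
    # Rank competing signals; the executive acts on the strongest first.
    return sorted(signals, key=signals.get, reverse=True)

signals = {
    "read a book": 0.2,
    "take a nap": 0.3,
    "eat lunch": 0.5,
    "duck and cover": 1.0,  # pain and danger signals preempt everything
}
print(arbitrate(signals))  # ['duck and cover', 'eat lunch', ...]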

John
 


From: "Dr. Lars Ludwig" <ma...@lars-ludwig.com>

Dr. Lars Ludwig

May 5, 2024, 5:19:15 PM
to ontolo...@googlegroups.com, John F Sowa, Peirce List, CG
John, 
 
if I remember correctly, what you propose here via a central executive was rejected in the cognitive sciences as the so-called "homunculus theory of cognition", meaning, in short, that the "decision making" of a system cannot be explained by an instance (a central executive) making decisions.
 
Lars   

John F Sowa

May 5, 2024, 9:52:25 PM
to Dr. Lars Ludwig, ontolo...@googlegroups.com, Peirce List, CG
Lars, List,

The Homunculus is a totally different concept proposed by philosophers.  It has no relationship to anything that the psychologists and neuroscientists have been studying.  The origin is an idea that goes back to 1956, with George Miller and his hypothesis about short-term memory and the "Magical Number Seven, Plus or Minus Two".

The psychologists Baddeley & Hitch wrote their initial article in 1974.  They wrote in response to Miller's hypothesis.  They realized that there is much more to short-term memory than just words and phonemes.  They called Miller's storage "the phonological loop" and they added a "visuo-spatial scratchpad" for short-term memory of imagery and feelings.  And they continued to revise and extend their research for another 20 or 30 years.  Neuroscientists, who are specialists in different aspects, have been working on related issues.  The consensus is not a single hypothesis, but a branch of research on issues related to conscious control of action by a central executive in the frontal lobes vs. subconscious control by the brainstem and the cerebellum.
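As a data-structure sketch of that model (the component names follow Baddeley & Hitch, but the Python layout and the capacity limits are my illustrative assumptions, not theirs):

from collections import deque
from dataclasses import dataclass, field

@dataclass
class WorkingMemory:
    # Limited span for verbal material (the phonological loop).
    phonological_loop: deque = field(default_factory=lambda: deque(maxlen=7))
    # Short-term store for imagery and spatial layout.
    visuospatial_scratchpad: deque = field(default_factory=lambda: deque(maxlen=4))

    def attend(self, item, kind):
        # The central executive routes each item to the right subsystem;
        # the oldest item is displaced when a store is full.
        store = (self.phonological_loop if kind == "verbal"
                 else self.visuospatial_scratchpad)
        store.append(item)

wm = WorkingMemory()
wm.attend("seven", "verbal")
wm.attend("map of the room", "spatial")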

For example, when you're walking down the street and talking on your cell phone, several different systems are controlling your actions:  (1) the central executive is in charge of what you're doing on the phone in talking and pushing buttons; (2) the cerebellum is guiding your steps in walking and maintaining your balance; (3) the brain stem is maintaining your breathing, heartbeat, and other bodily functions; and (4) the nerves running down the spine and branching to all parts of your body are controlling every movement and monitoring any abnormalities, such as a burn, a scratch, or a more serious injury.

In Freud's terms, the central executive is the ego, and the lower-level systems are the id.  Those ideas are much older, but they illustrate the kinds of issues involved.  The more recent research relates the observational data to actual neural functions in specific regions of the brain.  Since aspects of those functions can be traced back to the earliest bacteria, worms, and fish, there must be something fundamental about them.  AI systems that do not support related functions do so at their peril.

In my notes and the articles I cite, there are many references to ongoing research.  For more background, don't use those GPT-based things that summarize surface-level trivia.  You can start with Wikipedia, which cites the original research.  Then continue with more detailed studies in neuroscience.

John
 


From: "Dr. Lars Ludwig" <ma...@lars-ludwig.com>

John, 
 
if I remember correctly that what you propose here via a central executive was rejected in the cognitive sciences as the so called "homunculus theory of cognition", meaning, in short, that the "decision making" of a system cannot be explained by an instance (central executive) making decisions.
 
Lars   
John F Sowa <so...@bestweb.net> hat am 05.05.2024 21:23 CEST geschriebe
 
 

Ravi Sharma

May 7, 2024, 4:06:57 PM
to ontolo...@googlegroups.com, Dr. Lars Ludwig, Peirce List, CG
John
As you probably already know, in the Indian systems there is a tremendous amount of literature on mind, brain, intellect, applied reasoning, states of alertness and cognition levels, attention span, and the like.
I am a listener to many of these dialogs embedded in the knowledge system, and have studied enhancements in brain-lobe size through recitations and repetitions over years.
Are there fMRI or PET studies that confirm the role of the Central Executive relating to:
  • Level of cognition and awareness,
  • Decision making and outcomes or response handling,
  • etc.?
Regards.
Thanks.
Ravi
(Dr. Ravi Sharma, Ph.D. USA)
NASA Apollo Achievement Award
Former Scientific Secretary ISRO HQ
Ontolog Board of Trustees
Particle and Space Physics
Senior Enterprise Architect
SAE Fuel Cell Standards Member




Gary Berg-Cross

May 8, 2024, 2:15:20 PM
to ontolo...@googlegroups.com, Dr. Lars Ludwig, Peirce List, CG
As I mentioned in today's Ontolog Summit meeting on ethics for modern AI systems, it might be more useful to talk about executive function (EF) than an executive.
You can see a good summary argument in this article:  

A new era for executive function research: On the transition from centralized to distributed executive functioning


This follows some ideas people may remember from Minsky's Society of Mind and modular ideas of intelligence.  Distributed theories of cognitive abilities conceptualize "EFs as emergent consequences of highly distributed brain processes that communicate with a pool of highly connected hub regions, thus precluding the need for a central executive."

There is much more in the article, including ideas on testing distributed models and, from a risk point of view, this on trust based on distributed robustness:  a "key property of a DCS is its robustness to perturbations. In contrast to centralized systems, in which a nonbrain biological system such as a swarm would be vulnerable to the loss of its leading agent, a swarm organized as a DCS has been shown to be robust to degradation. Similarly, decentralized (i.e. distributed) networks have been shown to be resilient systems which are capable of absorbing large external perturbations without undergoing functional breakdown. A DCS network organization in the brain may therefore explain how EFs can be preserved to some extent in the face of pathological attack by lesion and substance-related disorders...."
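The robustness claim is easy to demonstrate in a toy simulation (a sketch assuming the networkx library; the star and small-world graphs are arbitrary stand-ins for centralized vs. distributed organization):

import networkx as nx

star = nx.star_graph(20)  # centralized: one hub, twenty followers
distributed = nx.connected_watts_strogatz_graph(21, k=4, p=0.3, seed=1)

star.remove_node(0)         # knock out the hub / leading agent
distributed.remove_node(0)  # knock out one node of the distributed net

print(nx.number_connected_components(star))         # 20 isolated fragments
print(nx.number_connected_components(distributed))  # typically still 1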

Gary Berg-Cross 
Potomac, MD


John F Sowa

May 8, 2024, 2:25:57 PM
to ontolo...@googlegroups.com, CG
In today's ZOOM session, I mentioned the idea of a central executive for an AI system.   As a starting point, imagine something like Siri, Alexa, Cortana -- but with much more smarts.  AI systems, both new and old, can provide a large part of the smarts.  Like humans, they wouldn't be infallible.  But they could support a central executive that would be held responsible in case of errors or problems or disasters.

The critical issue is RESPONSIBILITY.   The central executive would be comparable to the CEO of a corporation.  EVALUATION is essential. The central executive, like the CEO of a corporation, would know how to get any info that may be needed and who could evaluate it against whatever business, legal, factual, or ethical criteria are critical.  

LLMs are very good for finding information in ordinary language.  But they are not good at evaluating that information.  More traditional AI reasoning systems are more accurate and more reliable.  The central executive must have both kinds of abilities -- supported by appropriate AI assistants.  
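A hedged sketch of that division of labor (every function here is a hypothetical stand-in, not a real API): an LLM-style component proposes an answer, a rule-based evaluator checks it against explicit criteria, and the executive accepts or escalates:

def llm_find(question):
    # Stand-in for an LLM: good at retrieval, not guaranteed correct.
    return {"answer": "42", "source": "corpus"}

def evaluate(candidate, rules):
    # Stand-in for a traditional reasoner: check the candidate against
    # explicit business, legal, factual, or ethical criteria.
    return all(rule(candidate) for rule in rules)

def central_executive(question, rules):
    candidate = llm_find(question)
    if evaluate(candidate, rules):
        return candidate["answer"]
    return "escalate: candidate failed evaluation"

rules = [lambda c: c["source"] == "corpus", lambda c: c["answer"].isdigit()]
print(central_executive("What is the answer?", rules))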

I attached Section7 of an article I recently finished.  It explains some background from psychology, neuroscience, and computer systems.   See especially Figures 18, 19, and 20.  

Figure 18 represents human reasoning.  Figure 19 shows how an AI central executive could play the role of a human, and Figure 20 shows a similar "OODA" loop that has been used to analyze and solve “wicked” engineering problems, which involve “complex interdependences" between the systems and incomplete, inconsistent information about the problems.

And see some excerpts below from an earlier note I sent to Ontolog Forum. 

John
____________________________________
 
Sent: 5/5/24 9:52 PM

The psychologists Baddeley & Hitch wrote their initial article in 1974.  They wrote in response to George Miller's hypothesis about the "Magical Number Seven, Plus or Minus Two".  They realized that there is much more to short-term memory than just words and phonemes.  They called Miller's storage "the phonological loop" and they added a "visuo-spatial scratchpad" for short-term memory of imagery and feelings.  And they continued to revise and extend their research for another 20 or 30 years.
Section7.pdf

John F Sowa

May 8, 2024, 2:54:10 PM
to ontolo...@googlegroups.com, Dr. Lars Ludwig, Peirce List, CG
Gary,

Our notes crossed in the mail.  Thank you for citing that article about executive functions in the brain.  If you notice, they cite Baddeley & Hitch, who introduced the idea of a central executive.

And your idea about implementing executive functions in a computer system is very similar (maybe identical) to what I have been proposing.  Implementing executive functions along the lines of the article you cited (and the other articles I cited) is the key point.

It's irrelevant whether you call the top-level program THE central executive or whether you say that it implements executive functions.  Except for details of terminology, we are in violent agreement.

John
 


From: "Gary Berg-Cross" <gberg...@gmail.com>
Sent: 5/8/24 2:15 PM
To: ontolo...@googlegroups.com
Cc: "Dr. Lars Ludwig" <ma...@lars-ludwig.com>, Peirce List <peir...@list.iupui.edu>, CG <c...@lists.iccs-conference.org>

Gary Berg-Cross

May 8, 2024, 4:07:42 PM
to ontolo...@googlegroups.com, Dr. Lars Ludwig, Peirce List, CG
John,

As usual we largely agree.  I would suggest one distinction that might be worth making: we should not assume that we can build one thing called the executive; rather, it will emerge from a set of executive functions, which may cooperate or compete but which eventually reach some type of decision, conclusion, or action.
This idea may guide not only model development but also implementations to test the hypothesis of a distributed executive function.

I would again cite the article for some ideas on some relevant concepts:

"A key challenge is to derive descriptive measures that could provide testable hypothesis of the organizational rules that drive the network towards emergence of EFs. However, one promising approach to investigate EFs from a DCS perspective is the application of methods from network science, primarily graph theory, to neuroimaging data. A DCS conceptualizes executive functioning as an eminent property of multiple interacting elements of the brain. This is analogous to the network science framework, which aims to summarize, via a family of derived metrics, the organizing principles of a set of connected nodes. Hence, the graph theory framework is naturally aligned with a distributed perspective on EFs, with derived metrics potentially capturing the organizing principles or local rules that guide the behavior of the system."

Regards as always,

Gary Berg-Cross 
Potomac, MD


Ravi Sharma

May 8, 2024, 4:44:42 PM
to ontolo...@googlegroups.com, Dr. Lars Ludwig, Peirce List, CG
John
Delighted to see Gary and John converge on the Central Exec idea.
Also delighted with John's pronouncement today that it is not an empty idea and that his company is using this concept for applications.
Finally, if you have not already done so, look at the way ego (or notions near to Self), intellect, mind, and body are described in Indian thought; it will go a long way in refining apps using IT. This is my opinion.
Regards.
Thanks.
Ravi
(Dr. Ravi Sharma, Ph.D. USA)
NASA Apollo Achievement Award
Former Scientific Secretary ISRO HQ
Ontolog Board of Trustees
Particle and Space Physics
Senior Enterprise Architect
SAE Fuel Cell Standards Member


Michael DeBellis

May 8, 2024, 4:54:45 PM
to ontolog-forum
John Sowa said:  "But all the processes in the mind and brain are integrated, and they all operate continuously in different parts of the brain, which are all monitored and controlled by a central executive. "

Sorry, I know I'm being dense but there is still a fundamental question I don't understand the answer to. I assume the idea here is that there is some software system, let's call it E, that supervises what an LLM says and corrects the LLM when it is wrong. So if that is the case it seems to me that the capabilities of E must exceed those of the LLM. I.e., E must be able to understand anything that the LLM can understand and always provide an answer that is at least as good and is sometimes better than the LLM's answer. If that's not the case then we have chaos because E will sometimes correct the LLM even though the LLM's answer was correct. 

So if that's the case why bother using the LLM? Why not just use E?  

Michael

Kingsley Idehen

May 9, 2024, 11:38:54 AM
to ontolo...@googlegroups.com


Hi Michael,

I’ll try, using a GIF, since a picture always speaks a thousand words. Basically, the “executive” is application code that encapsulates the LLM as a functionality module.

[GIF: opal-overview, 690x252]

Application Breakdown.

  1. User submits a prompt to the OpenLink Personal Assistant (OPAL), which acts as a protective layer around ChatGPT or other LLMs, such as Mistral.
  2. Through external function integration between OPAL and ChatGPT, a context for interaction is established that drives the prompt completion pipeline, including Knowledge Graph look-ups.
  3. Response is returned to the user, with a notice indicating whether the response was sourced from our knowledge base or inferred by ChatGPT.
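A minimal sketch of that pipeline (this is not OPAL's actual code; the knowledge-base lookup, the LLM call, and the provenance notice are hypothetical stand-ins for the behavior described in steps 1-3):

def knowledge_graph_lookup(prompt, kb):
    return kb.get(prompt)  # None when the KB has no grounded answer

def llm_complete(prompt):
    return f"(inferred) best guess for: {prompt}"  # stand-in for ChatGPT

def assistant(prompt, kb):
    grounded = knowledge_graph_lookup(prompt, kb)  # step 2: KG look-up
    if grounded is not None:
        return grounded, "sourced from knowledge base"
    return llm_complete(prompt), "inferred by the LLM"

kb = {"Who makes Virtuoso?": "OpenLink Software"}
answer, provenance = assistant("Who makes Virtuoso?", kb)
print(f"{answer} [{provenance}]")  # step 3: response with its notice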

Links:

  1. How the OpenLink Personal Assistant App Works – a GIF
  2. How the OpenLink Personal Assistant App Works – Clickable HTML doc

Kingsley



Michael DeBellis

May 9, 2024, 6:49:48 PM
to ontolo...@googlegroups.com
Thanks, that makes sense. 


John F Sowa

May 9, 2024, 8:06:55 PM
to ontolo...@googlegroups.com
Michael,

Kingsley's diagram and discussion of OPAL is an example of a system that could implement something like a Central Executive (CE).  The theory of the CE was developed by psychologists and neuroscientists in order to model animals (of every species, including humans).  Studies of the brain show a huge amount of diversity distributed across many different components in the brain and body.

Question:  How can all the distributed diversity enable an animal to behave with a unified personality?  Your dog or cat, for example, behaves as if it had an "ego", a "self", or a "personality" that resembles the kinds of personalities that you might find in humans.

A joke with some resemblance to reality:   A dog behaves like a small man in a cheap fur coat.  A cat behaves like a tiny woman in a cheap fur coat.

Question:  How can you design an AI system that behaves like a human in a cheap metal suit?

Answer:  Design something along the lines of the Section7.pdf attachment in my previous note.

For examples of how you might begin the design, I would say that Kingsley's outline could be used as a top-level design.   For details about the components, see my Section7.pdf plus the many references at the bottom of that file.

John

Dr. Lars Ludwig

May 10, 2024, 9:08:19 AM
to so...@bestweb.net, ontolo...@googlegroups.com, Peirce List, CG
John, 
 
The first wave of cognitive scientists from the 60s and 70s (esp. from the US) used concepts from information science in order to explain the workings of the brain (maybe that's the reason you have a liking for this).  The second wave (inspired by progress in neuroscience) rejected these simplistic models by pointing to the tautological quality of such explanations (aka homunculus models).  The idea of a central executive in the brain is therefore an example of an outdated (rather weak) explanation pattern.  Someone on the list pointed out that it would be better to use "executive functions" and think of those as manifold and distributed.  That's one way.  More modern theories of cognition (see Wolfgang Prinz) link action (something executive) closely to perception, which hints in the opposite direction.  Thus, as a cognitive psychologist, I would strongly advise dropping the idea of a central executive, as it has no validity in the current cognitive sciences.
 
Lars      

Damion Dooley

May 10, 2024, 10:27:25 AM
to ontolog-forum
Just for fun, I had dropped a link during the meeting to https://en.wikipedia.org/wiki/Project_Cybersyn, a 1970s Chilean cybernetic experiment basically modelling central-executive concepts as far as I can tell, but with telex machines as nerves.  How far our thinking might have come along had a military dictatorship not snuffed this prototype out!

Gary Berg-Cross

May 10, 2024, 5:05:50 PM
to ontolo...@googlegroups.com, so...@bestweb.net, Dr. Lars Ludwig, Peirce List, CG

Lars,

I was the one who talked about the concept of executive function rather than an executive, and I guess that makes sense since I'm also a cognitive psychologist.

As you say, the more recent approach understands executive functions as a distributed process. This is also discussed as executive control, and the evidence from studying brain network activities (or damage to the PFC, etc.) suggests that it is not implemented by one individual network, but rather by dynamic interactions among several large-scale neural networks, including the fronto-parietal central executive network (CEN), the cingulo-opercular salience network (SN), and the very important inhibiting medial prefrontal-medial parietal default mode network (DMN).
This understanding, along with related cognitive models of operation, should be helpful in developing executive-type control of neuro-symbolic systems. But it is worth noting that artificial systems need not follow the biological model of particular networks with particular functions and the control/cooperative relationships between them.  There may be many more types of artificially intelligent systems with various swiftly recursive control and cooperation arrangements available, given sufficiently powerful computing and knowledge resources.

Some of us think that the path to that stage will require quite a bit more research and debate, but there would be much joy in the understanding that comes from doing such research.


Gary Berg-Cross 
Potomac, MD

Gary Berg-Cross

May 11, 2024, 11:05:54 AM
to ontolo...@googlegroups.com
Michael,

Here's my two cents, or at least initial thoughts.  Human cognition includes a significant ability of metacognition.  However, this ability does not mean it knows everything all the time about the other cognitions that are going on.  It's just that it has evolved, develops, and emerges as a capability from many neural networks in individuals.  It has what might be thought of as organizational rules that allow it to operate in a pragmatic, useful way for monitoring and guiding cognitive activities within the context of what we are experiencing, our beliefs, and our intentions.

The type of automated un-evolved metacognition you are talking about is more godlike and not likely to be available during our lifetimes, although we can imagine it. We can advance towards that idealized idea in simpler modular steps and along the way at some point we might get close to human executive function abilities.

Thanks for following up your question during the session which didn't get discussed adequately.


Gary Berg-Cross 
Potomac, MD


John F Sowa

May 11, 2024, 2:24:14 PM
to ontolo...@googlegroups.com
Gary BC, Michael DB, List,

First, I'll say that I basically agree with Gary, especially with the sentence "Human cognition includes a significant ability of metacognition."

The central executive (CE) is not an all-knowing E.  It basically uses metacognition to interpret comments and questions from the outside, asks its network of assistants to find or generate information, and translates that information into responses for the person who asked.  The network of assistants has a huge amount of knowledge and the ability to reason with and about that knowledge.  The CE makes decisions about the information they give it and takes appropriate actions.

The best comparison is to the CEO of a corporation.  The CEO is neither the most intelligent individual nor the most knowledgeable.  But the CEO has access, via a DISTRIBUTED network of managers and assistants, to everybody in the corporation who may have whatever specialized knowledge is required to address whatever problem or issue must be addressed.

To implement a Central Executive (CE),  just write a top-level program that controls whatever AI system you have designed.  Then implement a more intelligent version of Siri or Alexa that can find and discuss anything that anybody in the corporation knows or does.  The CE does not have that knowledge by itself.  But it can use its network of assistants to find somebody (or something) who does.  It also has access to intelligent assistants who can interpret and combine multiple bits of information from various parts of the network.  And it uses natural language processing (possibly based on or assisted by LLMs) that can formulate the replies.  Then the CE speaks or prints those replies to anybody who asked for the information.
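A minimal sketch of that top-level program (all module names are illustrative; the point is only that the CE holds knowledge ABOUT its assistants, not their content):

ASSISTANTS = {
    "finance": lambda q: "quarterly numbers from the finance system",
    "legal": lambda q: "contract status from the legal database",
    "general": lambda q: "summary assembled from the documentation",
}

def route(question):
    # Metacognition-lite: match the request to the assistant that can
    # handle it; fall back to a general-purpose assistant otherwise.
    for topic, assistant in ASSISTANTS.items():
        if topic in question.lower():
            return assistant(question)
    return ASSISTANTS["general"](question)

def central_executive(question):
    finding = route(question)   # delegate to the network of assistants
    return f"Reply: {finding}"  # formulate the response for the asker

print(central_executive("What is our legal exposure?"))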

On a related issue, see my next note about Verses AI.

John  
 


From: "Gary Berg-Cross" <gberg...@gmail.com>

Kingsley Idehen

May 12, 2024, 11:37:17 AM
to ontolo...@googlegroups.com


Hi John and other interested parties,

I am attaching a revised GIF that more accurately illustrates how the OPAL system functions.  I've included additional steps (from 9 onward) that demonstrate how the conversational nature of the interaction is managed by ChatGPT.

[GIF: opal-architecture-overview-4, 1485x575]

A key feature enabling this functionality is the external function integration using callbacks, as provided by the OpenAI completions API.  Currently, only Mistral (an open-source LLM) offers a similar feature, though it is not yet ready for serious deployment.

Kingsley

opal-architecture-overview-4.gif

John F Sowa

May 12, 2024, 4:05:27 PM
to ontolo...@googlegroups.com
I agree with Kingsley's one-sentence summary:  "Basically, the “executive” is application code that encapsulates the LLM as a functionality module."

A longer summary:  The LLMs, or any other systems that process information and do whatever the users request, are rarely, if ever, the software that carries on a conversation with people.  Therefore, we should design that software as a kind of very intelligent Siri or Alexa that handles all communications with the users and is responsible for finding and accessing all the other software and systems that perform whatever services the users want or need.

As Gary said, the program that interacts with the end users does a kind of metacognition.  It knows ABOUT all the sources of information and the systems that perform actions.  It handles all communications to, from, and about them.  Imagine, for example, a librarian at your local library or something as big as the Library of Congress.

John
___________________

Ravi Sharma

May 12, 2024, 8:36:43 PM
to ontolo...@googlegroups.com
John (also Kingsley)
Thanks for the response, and also for partly answering the S4 question for Verses AI.
You are saying that the Central Executive will, on one hand, communicate with LLMs (and hopefully visual images and languages) and, on the other, have a human interface such as the CBR tools of the past? That way they only address what is needed rather than the whole LLM store?
Regards
Thanks.
Ravi
(Dr. Ravi Sharma, Ph.D. USA)
NASA Apollo Achievement Award
Former Scientific Secretary ISRO HQ
Ontolog Board of Trustees
Particle and Space Physics
Senior Enterprise Architect
SAE Fuel Cell Standards Member



Gary Berg-Cross

May 13, 2024, 10:59:50 AM
to ontolo...@googlegroups.com
To continue this discussion, taking some hints from bio-inspiration.
Like others, I would guess that artificial intelligence is going to have more diverse architectures than, say, primates.  But cognitive models of primate/human function can provide some design ideas that may be useful for moving forward.  So basic things like metacognition, a belief system, mental models of others, and the idea of intentions suggest some capabilities that should be designed in.
Somehow neural architectures that have evolved over time seem to be able to accomplish this.  I would thus expect to see some components in system designs that allow for these capabilities. And broadly speaking these seem emergent and therefore a simple design principle is "Design a system for emergence". 
 
"An intelligent agent should not be completely designed, but rather should be endowed with the ability to self-direct the exploration of its own sensory-motor capabilities, and with means to escape its limited built-in behavioral repertoire, and to acquire its own history.  After Lungarella 's "


Gary Berg-Cross 
Potomac, MD
