consciousness

Nadin, Mihai

unread,
Oct 1, 2023, 4:59:46 PM10/1/23
to Ontolog Forum

John F Sowa

unread,
Oct 1, 2023, 11:40:20 PM10/1/23
to ontolo...@googlegroups.com, CG, Peirce List
That article shows several points:  (1) The experts on the subject don't agree on basic issues.  (2) They are afraid that too much criticism of one theory will cause neuroscientists to consider all theories dubious.  (3) They don't have clear criteria for what kinds of observations would or would not be considered relevant to the issues.

But I want to mention some questions I have:   What parts of the brain are relevant for any sensation of consciousness?  All parts? Some parts?  Some parts more than others?  Which ones?

From common experience, we know that complex activities require a great deal of conscious attention when we're first learning them.  But after we learn them, they become almost automatic, and we can perform them without thinking about them.  Examples:  Learning to ski vs. skiing smoothly on moderate hills vs skiing on very steep or complex surfaces.  The same issues apply to any kind of skill:  driving a car, driving a truck, flying a plane, swimming, dancing, skating, mountain climbing, working in any profession of any kind -- indoors, outdoors, on a computer, with any kinds of tools, instruments, conditions, etc.

In every kind of skill, the basic techniques become automatic and can be performed with a minimum of conscious attention.  There is strong evidence that the effort in the cerebrum (AKA the cerebral cortex) is conscious, but expert skills are controlled by the cerebellum, which is not conscious.  There is a brief discussion of the cerebellum in Section6.pdf (see the latest excerpt I sent, which is dated 28 Sept 2023).

For more about the role of the cerebellum, see the article and video of a man who was born without a cerebellum and survived:  A Man's Incomplete Brain Reveals Cerebellum's Role In Thought And Emotion.

 


From: "Nadin, Mihai" <na...@utdallas.edu>

Ricardo Sanz

unread,
Oct 2, 2023, 4:16:50 AM10/2/23
to ontolo...@googlegroups.com
A combat for media appearance :-)

R.



--

UNIVERSIDAD POLITÉCNICA DE MADRID

Ricardo Sanz

Head of Autonomous Systems Laboratory

Escuela Técnica Superior de Ingenieros Industriales

Center for Automation and Robotics

Jose Gutierrez Abascal 2.

28006, Madrid, SPAIN

Ricardo Sanz

unread,
Oct 2, 2023, 4:28:44 AM10/2/23
to ontolo...@googlegroups.com
Hi,

>> What parts of the brain are relevant for any sensation of consciousness? 

So far, the question of the neural correlates of consciousness (NCC) remains unresolved. This was the theme of the Chalmers-Koch wager. There are too many theories and not enough relevant experimental data to decide.

The most repeated theory is that consciousness is hosted in thalamo-cortical reentrant loops, linking the cortex (the sensorimotor data processor) and the thalamus (the main relay station of the brain). This is yet to be demonstrated.

Another widely repeated theory was that the NCC was a train of 40 Hz signal waves across the whole brain.

The boldest, to me, however, is quantum macroscopic coherence in the axon microtubules. This is called the Orchestrated Objective Reduction (Orch-OR) theory.

Best,
Ricardo




Ravi Sharma

unread,
Oct 2, 2023, 3:12:42 PM10/2/23
to ontolo...@googlegroups.com
Ricardo, John
We depend on cognition as essential to most things, for example logic and thought, especially for reaching concepts in the study of ontology.
Where is that cognition located?
Hard to answer, but one of the answers could be "not only in the physical organ, the brain, alone" but in more than that.
In Sanskrit the entity required for cognition is "Manas", which includes the brain but also items such as thoughts and feelings.
Big puzzles.
Thanks.
Ravi
(Dr. Ravi Sharma, Ph.D. USA)
NASA Apollo Achievement Award
Ontolog Board of Trustees
Particle and Space Physics
Senior Enterprise Architect



John F Sowa

unread,
Oct 2, 2023, 3:37:37 PM10/2/23
to ontolo...@googlegroups.com, CG, Peirce List
Ricardo, Alex, Anatoly, and anybody who is working with or speculating about LLMs for generative AI,

LLMs have proved to be valuable for machine translation of languages.  They have also been used to implement many kinds of toys that appear to be impressive.    But nobody has shown that LLM technology can be used for any mission critical applications of any kind -- i.e. any applications for which a failure would cause a disaster (financial or human or both).

Question:  Companies that are working on generative AI are *taking* a huge amount of money from investors.  Have any of them produced any practical applications that are actually *making* money?   Generative AI is now at the top of the hype cycle.  That implies an impending crash into the trough of disillusionment.  When will that crash occur?  Unless anybody can demonstrate applications that make money, the investors are going to be disillusioned.

To Ricardo> Those are interesting hypotheses about consciousness in your note below.  But none of them have any significant implications for AI, ontology, or the possibility of money-making applications of LLMs.

One important point:  Nobody suggests that anything in the cerebellum is conscious.  The results from the cerebellum that are reported to the cortex are critical, especially since the cerebellum has more than four times as many neurons as the cerebral cortex.  There is also strong evidence that the cerebellum is essential for complex mathematics.  (See Section 6.pdf)

Implication:  AI methods that simulate processes in the cerebral cortex (such as natural language processing by LLMs) cannot do the heavy-duty computation that is done by neurons in the cerebellum -- and that includes the most complex logic and mathematics.

See the summary in Section6.pdf and my other references below.

John
 


From: "Ricardo Sanz" <ricardo.s...@gmail.com>


Alex Shkotin

unread,
Oct 3, 2023, 3:42:08 AM10/3/23
to ontolo...@googlegroups.com, CG, Peirce List
John, 

I'm researching how LLMs work. And we will really find out where they will be used after the hype in 3-5 years.

Alex

Mon, Oct 2, 2023 at 22:37, John F Sowa <so...@bestweb.net>:

Dan Brickley

unread,
Oct 3, 2023, 4:23:48 AM10/3/23
to ontolo...@googlegroups.com, CG, Peirce List
Frankly, this is just getting silly. 

The ability of these systems to engage with human-authored text in ways highly sensitive to their content and intent is absolutely stunning. Encouraging members of this forum to delay putting time into learning how to use LLMs is doing them no favours. All of us love to feel we can see through hype, but it’s also a brainworm that means we’ll occasionally miss out on things whose hype is grounded in substance. 

I have watched many of my friends in Semantic Web circles slip into the “I tried it, they confabulate”, “I tried it, it couldn’t solve some logic puzzle”, “I tried it, it’s just an autocomplete parrot” mode of response. I could cry.

My suggestion for paths out of this polarization is just to urge skeptics to try to suspend disbelief. Spend an hour a day or an hour a week using these things. But don’t waste the hour on adversarial gotcha questions or logic puzzles or treating it as a knowledge oracle that knows everything. Focus on back and forth, “talking to (or with) it”, even while knowing it isn’t really talking as we understand that concept. Just play make believe, go along with the gullible hype-misled masses, pretend you’re talking with it, but in a collaborative rather than gotcha mode. As you do this, please resist the urge to collect its failings as counter evidence (and there will be plenty) or focus on tripping it up to show that Dan has lost his mind. Use a different hour for that.

Instead, imagine you are testing an early WWW browser in 1993. Instead of trying to trick or glitch it, focus on the relative robustness of its responses to your input. Invent some pretext for back and forth that involves it needing to respond sensitively to what you type, e.g. a tutorial, or a game with multiple choices for next action, or maybe a quiz. In tools like ChatGPT you can go back and retry an interaction. Use this to explore how it responds to different formulations of your input. Try to say the same thing but expressed in 100 (one hundred!) different text strings. Imagine you are dyslexic, an immigrant with English as a second language, an interdisciplinary researcher, whatever - just use the full flexibility of natural language expression and note what kinds of variation it seems to handle well or badly. That’s all.

I am pretty confident anyone trying this will find themselves on a path from gotcha-ing to understanding that we are in one of those rare moments when the hype (although often preposterous, misleading, exploitative, gullible, stupid or all of the above) is grounded in a historical milestone of the kind many lifetimes never saw.
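[Editorial sketch] To make the "say the same thing a hundred ways" exercise concrete, here is a minimal sketch (not from Dan's post) that sends several rewordings of one request to a chat model and records the replies. The ask_llm function is a hypothetical placeholder for whatever model or interface you actually use.

def ask_llm(prompt: str) -> str:
    """Hypothetical stand-in for a call to whatever chat model you use."""
    return "(model response to: " + prompt + ")"

# The same underlying request, expressed in deliberately varied English.
variants = [
    "Please walk me through long division with an example.",
    "how do i do long division, show the steps pls",
    "Explain long division as if I were ten years old.",
    "I never learnt long divison properly -- can you teach me?",   # typo on purpose
    "As a second-language speaker: what is the procedure for dividing large numbers by hand?",
]

for v in variants:
    reply = ask_llm(v)
    # Note what kinds of variation are handled well or badly,
    # rather than collecting "gotcha" failures.
    print("PROMPT:  ", v)
    print("RESPONSE:", reply)
    print()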

IMHO etc.,

Dan



Stephen Young

unread,
Oct 3, 2023, 4:37:52 AM10/3/23
to ontolo...@googlegroups.com, CG, Peirce List
Yup.  My 17yo only managed 94% in his Math exam.  He got 6% wrong.  Hopeless - he'll never amount to anything.

Anatoly Levenchuk

unread,
Oct 3, 2023, 7:20:37 AM10/3/23
to ontolo...@googlegroups.com, CG, Peirce List

John,

When you target LLM and ANN as its engine, you should consider that this is a very fast-moving target. E.g., consider recent work (and imagine what can be done there in a year or two in graph-of-thoughts architectures):

Boolformer: Symbolic Regression of Logic Functions with Transformers

Stéphane d'Ascoli, Samy Bengio, Josh Susskind, Emmanuel Abbé

In this work, we introduce Boolformer, the first Transformer architecture trained to perform end-to-end symbolic regression of Boolean functions. First, we show that it can predict compact formulas for complex functions which were not seen during training, when provided a clean truth table. Then, we demonstrate its ability to find approximate expressions when provided incomplete and noisy observations. We evaluate the Boolformer on a broad set of real-world binary classification datasets, demonstrating its potential as an interpretable alternative to classic machine learning methods. Finally, we apply it to the widespread task of modelling the dynamics of gene regulatory networks. Using a recent benchmark, we show that Boolformer is competitive with state-of-the art genetic algorithms with a speedup of several orders of magnitude. Our code and models are available publicly.


https://arxiv.org/abs/2309.12207
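[Editorial sketch] For readers unfamiliar with the task, the following toy code (not from the paper) shows what "symbolic regression of Boolean functions" means: given a complete truth table, find a formula that reproduces it. The target function and the tiny candidate pool are invented for illustration; Boolformer's contribution is replacing this kind of brute-force search with a Transformer that predicts the formula directly from the observations.

from itertools import product

def target(a, b, c):
    # The "unknown" Boolean function, observed only through its truth table.
    return (a and not b) or c

truth_table = [((a, b, c), target(a, b, c))
               for a, b, c in product([False, True], repeat=3)]

# A small pool of candidate formulas: (readable string, callable) pairs.
candidates = [
    ("a and b",             lambda a, b, c: a and b),
    ("a or c",              lambda a, b, c: a or c),
    ("not (b or c)",        lambda a, b, c: not (b or c)),
    ("(a and not b) or c",  lambda a, b, c: (a and not b) or c),
]

for expr, fn in candidates:
    if all(fn(*inputs) == output for inputs, output in truth_table):
        print("formula matching the truth table:", expr)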

Best regards,
Anatoly

 


John F Sowa

unread,
Oct 3, 2023, 4:54:11 PM10/3/23
to ontolo...@googlegroups.com, CG, Peirce List
Anatoly, Stephen, Dan, Alex, and every subscriber to these lists,

I want to emphasize two points:  (1) I am extremely enthusiastic about LLMs and what they can and cannot do.  (2) I am also extremely enthusiastic about the 60+ years of R & D in AI technologies and what they have and have not done.  Many of the most successful AI developments are no longer called AI because they have become integral components of computer science.  Examples:  compilers, databases, computer graphics, and the interfaces of nearly every appliance we use today:  cars, trucks, airplanes, rockets, telephones, farm equipment,  construction equipment, washing machines, etc.  For those things, the AI technology of the 20th century is performing mission-critical operations with a level of precision and dependability that unaided humans cannot achieve without their help.

Fundamental principle:  For any tool of any kind -- hardware or software -- it's impossible to understand exactly what it can do until the tool is pushed to the limits where it breaks.  At that point, an examination of the pieces shows where its strengths and weaknesses lie.

For LLMs, some of the breaking points have been published as hallucinations and humorous nonsense.  But more R & D is necessary to determine where the boundaries are, how to overcome them, work around them, and supplement them with the 60+ years of other AI tools.

Anatoly>  When you target LLM and ANN as its engine, you should consider that this is a very fast-moving target. E.g., consider recent work (and imagine what can be done there in a year or two in graph-of-thoughts architectures) . . .

Yes, that's obvious.  The article you cited looks interesting, and there are many others.  They are certainly worth exploring.  But I emphasize the question I asked:    Google and OpenAI have been exploring this technology for quite a few years.   What mission-critical applications have they or anybody else discovered and implemented?

So far the only truly successful applications are in MT -- machine translation of languages, natural and artificial.   Can anybody point to any other applications that are mission critical for any business or government organization anywhere?

Stephen Young>  Yup.  My 17yo only managed 94% in his Math exam.  He got 6% wrong.  Hopeless - he'll never amount to anything. 

The LLMs have been successful in passing various tests at levels that match or surpass the best humans.  But that's because they cheat.  They have access to a huge amount of information on the WWW about a huge range of tests.  But when they are asked routine questions for which the answers or the methods for generating answers cannot be found, they make truly stupid mistakes.

No mission-critical system that guides a car, an airplane, a rocket, or a farmer's plow can depend on such tools. 

Dan Brickley> Encouraging members of this forum to delay putting time into learning how to use LLMs is doing them no favours. All of us love to feel we can see through hype, but it’s also a brainworm that means we’ll occasionally miss out on things whose hype is grounded in substance. 

Yes, I enthusiastically agree.   We must always ask questions.  We must study how LLMs work, what they do, and what their limitations are.   If they cannot solve some puzzle, it's essential to find out why.     Noticing a failure on one problem is not an excuse for giving up.  It's a clue for guiding the search.

Alex>  I'm researching how LLMs work.  And we will really find out where they will be used after the hype in 3-5 years.

Yes.  But that is when everybody else will have won the big contracts to develop the mission-critical applications. 

Now is the time to do the critical research on where the strengths and limitations are.  Right now, the crowd is having fun building toys that exploit the obvious strengths.  The people who are doing the truly fundamental research are exploring the limitations and how to get around them.

John

Alex Shkotin

unread,
Oct 4, 2023, 4:38:44 AM10/4/23
to ontolo...@googlegroups.com, CG, Peirce List

John,


For me LLM is a different technology compared to formal ontologies. Completely different.

JFS: "But that is when everybody else will have won the big contracts to develop the mission-critical applications."

I am more interested in the use of formal ontologies in mission-critical applications.

JFS: "Now is the time to do the critical research on where the strengths and limitations are."

There are such reviews. Here's the first one I came across https://indatalabs.com/blog/large-language-model-apps


Alex



Tue, Oct 3, 2023 at 23:54, John F Sowa <so...@bestweb.net>:

Alex Shkotin

unread,
Oct 4, 2023, 5:17:54 AM10/4/23
to ontolo...@googlegroups.com, CG, Peirce List
IN ADDITION:
This is the picture of the domain of critical research (image attachment not included).

Alex

Wed, Oct 4, 2023 at 11:38, Alex Shkotin <alex.s...@gmail.com>:

Kingsley Idehen

unread,
Oct 4, 2023, 11:07:55 AM10/4/23
to ontolo...@googlegroups.com, John F Sowa, CG, Peirce List

Hi John,

On 10/3/23 4:53 PM, John F Sowa wrote:
Anatoly, Stephen, Dan, Alex, and every subscriber to these lists,

I want to emphasize two points:  (1) I am extremely enthusiastic about LLMs and what they can and cannot do.  (2) I am also extremely enthusiastic about the 60+ years of R & D in AI technologies and what they have and have not done.  Many of the most successful AI developments are no longer called AI because they have become integral components of computer science.  Examples:  compilers, databases, computer graphics, and the interfaces of nearly every appliance we use today:  cars, trucks, airplanes, rockets, telephones, farm equipment,  construction equipment, washing machines, etc.  For those things, the AI technology of the 20th century is performing mission-critical operations with a level of precision and dependability that unaided humans cannot achieve without their help.

Fundamental principle:  For any tool of any kind -- hardware or software -- it's impossible to understand exactly what it can do until the tool is pushed to the limits where it breaks.  At that point, an examination of the pieces shows where its strengths and weaknesses lie.

For LLMs, some of the breaking points have been published as hallucinations and humorous nonsense.  But more R & D is necessary to determine where the boundaries are, how to overcome them, work around them, and supplement them with the 60+ years of other AI tools.


I think it is safe to conclude that a natural language processor and code generator can't also function as an all-answering oracle without integration with domain-specific Knowledge Bases, which is why Retrieval Augmented Generation (RAG) has become an emergent area of activity in recent times.


RAG is basically about loosely coupling LLMs and Knowledge Bases (including Knowledge Graphs), which is an area I've been experimenting with for some time now [1].
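[Editorial sketch] For readers who have not seen the pattern, here is a minimal sketch of the RAG idea described above: retrieve relevant statements from a knowledge base first, then hand them to the model as context. The toy knowledge base, the keyword-overlap retriever, and the llm() stub are all invented placeholders, not part of any OpenLink or OpenAI API.

# Rough sketch of retrieval-augmented generation (RAG): look facts up
# in a knowledge base, then pass them to the LLM as grounding context.
knowledge_base = {
    "virtuoso": "Virtuoso is a multi-model database that exposes SPARQL endpoints.",
    "sparql":   "SPARQL is the W3C query language for RDF knowledge graphs.",
    "rag":      "RAG couples a generator with an external retrieval step.",
}

def retrieve(question, k=2):
    """Naive keyword-overlap retriever over the toy knowledge base."""
    words = set(question.lower().split())
    scored = sorted(knowledge_base.items(),
                    key=lambda kv: len(words & set(kv[1].lower().split())),
                    reverse=True)
    return [text for _, text in scored[:k]]

def llm(prompt):
    """Hypothetical stand-in for a call to an actual language model."""
    return "(answer grounded in: " + prompt + ")"

question = "How do I query a knowledge graph with SPARQL?"
context = "\n".join(retrieve(question))
answer = llm("Use only the facts below to answer.\n" + context + "\n\nQ: " + question + "\nA:")
print(answer)

A production system would replace the keyword overlap with embedding-based search over a real knowledge graph or SPARQL endpoint; the loose coupling of retriever and generator is the point of the pattern.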



Anatoly>  When you target LLM and ANN as its engine, you should consider that this is a very fast-moving target. E.g., consider recent work (and imagine what can be done there in a year or two in graph-of-thoughts architectures) . . .

Yes, that's obvious.  The article you cited looks interesting, and there are many others.  They are certainly worth exploring.  But I emphasize the question I asked:    Google and OpenAI have been exploring this technology for quite a few years.   What mission-critical applications have they or anybody else discovered and implemented?

So far the only truly successful applications are in MT -- machine translation of languages, natural and artificial.   Can anybody point to any other applications that are mission critical for any business or government organization anywhere?


Yes, software help and support. It is now possible to build assistants (or co-pilots) that fill voids that have challenged software usage for years [2][3] i.e., conversational self-support and help as integral parts of applications.



Stephen Young>  Yup.  My 17yo only managed 94% in his Math exam.  He got 6% wrong.  Hopeless - he'll never amount to anything. 

The LLMs have been successful in passing various tests at levels that match or surpass the best humans.  But that's because they cheat.  They have access to a huge amount of information on the WWW about a huge range of tests.  But when they are asked routine questions for which the answers or the methods for generating answers cannot be found, they make truly stupid mistakes.

No mission-critical system that guides a car, an airplane, a rocket, or a farmer's plow can depend on such tools.


True, but there are many other areas of utility that require less precision -- as per my comments above.



Dan Brickley> Encouraging members of this forum to delay putting time into learning how to use LLMs is doing them no favours. All of us love to feel we can see through hype, but it’s also a brainworm that means we’ll occasionally miss out on things whose hype is grounded in substance. 

Yes, I enthusiastically agree.   We must always ask questions.  We must study how LLMs work, what they do, and what their limitations are.   If they cannot solve some puzzle, it's essential to find out why.     Noticing a failure on one problem is not an excuse for giving up.  It's a clue for guiding the search.

Alex>  I'm researching how LLMs work.  And we will really find out where they will be used after the hype in 3-5 years.

Yes.  But that is when everybody else will have won the big contracts to develop the mission-critical applications. 

Now is the time to do the critical research on where the strengths and limitations are.  Right now, the crowd is having fun building toys that exploit the obvious strengths.  The people who are doing the truly fundamental research are exploring the limitations and how to get around them.


Yes, sandboxing LLMs can mitigate the adverse effects of hallucinations. OpenAI, in particular, offers integration points that facilitate this, such as support for external function integration using callbacks, among other features.
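[Editorial sketch] As a rough illustration of that sandboxing idea (not OpenAI's actual wire format), the sketch below shows the callback pattern: the model can only request one of a small set of whitelisted functions, and the application decides whether to execute the request. The model's tool-call output is mocked here as a JSON string.

import json

# The model does not act on the world directly; it can only *request*
# one of a small set of whitelisted callbacks, and the application
# executes (or refuses) them.  model_output is mocked -- a real system
# would get it from the provider's function/tool-calling interface.

def lookup_order(order_id):
    return "Order " + order_id + ": shipped"              # toy implementation

def cancel_order(order_id):
    return "Order " + order_id + ": cancellation queued"  # toy implementation

ALLOWED_CALLBACKS = {"lookup_order": lookup_order, "cancel_order": cancel_order}

# Pretend the model asked to call a function (mocked JSON).
model_output = json.dumps({"function": "lookup_order",
                           "arguments": {"order_id": "A-1001"}})

request = json.loads(model_output)
fn = ALLOWED_CALLBACKS.get(request["function"])
if fn is None:
    result = "refused: function not whitelisted"
else:
    result = fn(**request["arguments"])
print(result)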



John


Links:

[1] https://medium.com/virtuoso-blog/chatgpt-and-semantic-web-symbiosis-1fd89df1db35 -- ChatGPT and Semantic Web Symbiosis

[2] https://www.linkedin.com/pulse/leveraging-llm-based-conversational-assistants-bots-enhanced-idehen/

[3] netid-qa.openlinksw.com:8443/chat/?chat_id=746cbe10c60e9b2544211adf071c714e -- Assistant Transcript & Demo

-- 
Regards,

Kingsley Idehen	      
Founder & CEO 
OpenLink Software   
Home Page: http://www.openlinksw.com
Community Support: https://community.openlinksw.com
Weblogs (Blogs):
Company Blog: https://medium.com/openlink-software-blog
Virtuoso Blog: https://medium.com/virtuoso-blog
Data Access Drivers Blog: https://medium.com/openlink-odbc-jdbc-ado-net-data-access-drivers

Personal Weblogs (Blogs):
Medium Blog: https://medium.com/@kidehen
Legacy Blogs: http://www.openlinksw.com/blog/~kidehen/
              http://kidehen.blogspot.com

Profile Pages:
Pinterest: https://www.pinterest.com/kidehen/
Quora: https://www.quora.com/profile/Kingsley-Uyi-Idehen
Twitter: https://twitter.com/kidehen
Google+: https://plus.google.com/+KingsleyIdehen/about
LinkedIn: http://www.linkedin.com/in/kidehen

Web Identities (WebID):
Personal: http://kingsley.idehen.net/public_home/kidehen/profile.ttl#i
        : http://id.myopenlink.net/DAV/home/KingsleyUyiIdehen/Public/kingsley.ttl#this

Alex Shkotin

unread,
Oct 5, 2023, 11:31:55 AM10/5/23
to ontolo...@googlegroups.com, CG, Peirce List

John and all


I’ve probably already written that I regularly read Sergei Karelov’s reviews on Facebook on LLM and other AI technologies. Here is a rather long quote from his post today (0):

“The study by Max Tegmark’s group at MIT, “Language models represent space and time” (1), provided evidence that large language models (LLMs) are not just machine learning systems trained on huge collections of superficial statistical data. LLMs build within themselves holistic models of the data-generating process - models of the world.

The authors present evidence of the following:

• LLMs learn linear representations of space and time at different scales;

• These representations are robust to variations in prompts and are unified across different types of objects (for example, cities and landmarks).

In addition, the authors identified separate “space neurons” and “time neurons” that reliably encode spatial and temporal coordinates.

The analysis presented by the authors shows that modern LLMs are acquiring structured knowledge about fundamental dimensions such as space and time, which supports the view that LLMs are learning literal models of the world rather than just superficial statistics.

Those wishing to check the results of the study and the conclusions of the authors can do so here (2) (the open-source model is available for any verification)." <translated by Google Translate>


This is certainly not a critical study, but research.


Alex


0 https://www.facebook.com/sergey.karelov.5/posts/pfbid0QuKtWKtJV1yPLnjk1gcBcsjPBpCkjhDC9PcfrV5KeMvix75MsbH3nPmt1NE6u8uPl 

1 https://arxiv.org/abs/2310.02207

2 https://github.com/wesg52/world-models


Tue, Oct 3, 2023 at 23:54, John F Sowa <so...@bestweb.net>:
Anatoly, Stephen, Dan, Alex, and every subscriber to these lists,


John F Sowa

unread,
Oct 5, 2023, 5:22:48 PM10/5/23
to ontolo...@googlegroups.com
Alex,

As I have said many times.  LLMs have shown a great deal of potential for performing some very important kinds of AI techniques.  I strongly encourage good solid research on the foundations and potential applications of those systems.

Alex>  This is certainly not a critical study, but research.

Any so-called research that is not critical is like picking up pretty stones on a beach and claiming to be doing archaeology.

Alex> In addition, the authors identified separate “space neurons” and “time neurons” that reliably encode spatial and temporal coordinates. 

That claim shows that those so-called "researchers" are incompetent.    The source data from the WWW has an immense number of items that encode locations and times.  Since LLMs were designed for machine translation, they would certainly find some way of relating time and space encodings to one another in order to do translations.

Those "researchers" who found such encodings and called them "neurons" were incredibly naive or just plain stupid.   But any investor who gives them money is even stupider.

I repeat my point:  Unless and until those hackers produce mission critical software to solve some investor's mission critical problems, they are producing bull-turds and calling them flowers.

John
 


From: "Alex Shkotin" <alex.s...@gmail.com>

John F Sowa

unread,
Oct 5, 2023, 6:02:54 PM10/5/23
to ontolo...@googlegroups.com
For anybody who thinks that I am being too harsh on those poor researchers who are doing their best to understand what LLMs are doing, please note the following excerpt from their publication.  Note that they are from MIT (a place where I happened to study AI many years ago).  But in those days, we could tell our customers exactly what we were doing with their critical data.

Those researchers also cite research from neuroscience about studying where and how the human brain encodes space and time information.  That is legitimate research, because no human has any idea about the details of how the human brain works.

But if AI people design a system that is as opaque as the human brain, those of us who read their research are justified in raising serious doubts about their competence.

John

-----------------------------------------------
LANGUAGE MODELS REPRESENT SPACE AND TIME, Wes Gurnee & Max Tegmark Massachusetts Institute of Technology 

7 DISCUSSION

We have provided evidence that LLMs learn linear representations of space and time that are unified across entity types and fairly robust to prompting, and that there exist individual neurons that are highly sensitive to these features. The corollary is that next token prediction alone is sufficient for learning a literal map of the world given sufficient model and data size.

Our analysis raises many interesting questions for future work. While we showed that it is possible to linearly reconstruct a sample’s absolute position in space or time, and that some neurons use these probe directions, the true extent and structure of spatial and temporal representations remain unclear. In particular, we conjecture that the most canonical form of this structure is a discretized hierarchical mesh, where any sample is represented as a linear combination of its nearest basis points at each level of granularity. Moreover, the model can and does use this coordinate system to represent absolute position using the correct linear combination of basis directions in the same way a linear probe would. We expect that as models scale, this mesh is enhanced with more basis points, more scales of granularity (e.g. neighborhoods in cities), and more accurate mapping of entities to model coordinates (Michaud et al., 2023). This suggests future work on extracting representations in the model’s coordinate system rather than trying to reconstruct human interpretable coordinates, perhaps with sparse autoencoders (Cunningham et al., 2023).

Another confounder in our analysis, and factual recall research more broadly, is the existence of many entities in our dataset which the model is unaware of, contaminating our activation datasets. We would be interested in methods that can identify when a model recognizes a particular entity beyond simply prompting for specific facts and risking hallucinations.

We also barely scratched the surface of understanding how these spatial and temporal world models are learned, recalled, or used internally. By looking across training checkpoints, it may be possible to localize a point in training when a model organizes constituent "is in place X" features into a coherent geometry or else conclude this process is gradual (Liu et al., 2021). We expect that the model components which construct these representations are similar or identical to those for factual recall (Meng et al., 2022a; Geva et al., 2023). In preliminary experiments, we found our models had trouble answering basic spatial and temporal relations questions without relying on multi-step reasoning, complicating any causal intervention analysis (Wang et al., 2022), but think that this is the natural next step for understanding when and how these features are used.
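[Editorial sketch] To illustrate the linear-probe technique the excerpt refers to, here is a self-contained toy example: fit a linear map from "activations" to a scalar coordinate such as a year. The activations below are synthetic random vectors with a planted linear signal; they are not taken from any real language model.

import numpy as np

rng = np.random.default_rng(0)
n_samples, d_model = 500, 64

true_direction = rng.normal(size=d_model)          # planted "time direction"
years = rng.uniform(1800, 2000, size=n_samples)    # ground-truth coordinate

# activations = signal along one direction + noise everywhere else
activations = np.outer((years - 1900) / 100, true_direction)
activations += rng.normal(scale=0.1, size=(n_samples, d_model))

# Linear probe: ordinary least squares from activations (plus bias) to years.
X = np.hstack([activations, np.ones((n_samples, 1))])
w, *_ = np.linalg.lstsq(X, years, rcond=None)
pred = X @ w

r2 = 1 - np.sum((years - pred) ** 2) / np.sum((years - years.mean()) ** 2)
print("probe R^2 on synthetic data:", round(r2, 3))

Real probing experiments fit the same kind of regression, but on activations extracted from an actual model over the entity datasets the paper releases (the repository mentions Llama and Pythia).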

Stephen Young

unread,
Oct 5, 2023, 6:25:58 PM10/5/23
to ontolo...@googlegroups.com
> But if AI people design a system that is as opaque as the human brain, those of us who read their research
> are justified in raising serious doubts about their competence.

??  We should look at this early work in Generative AI as science, rather than engineering - and science largely *is* the study of opaque artificial constructions.



--
Stephen Young


Alex Shkotin

unread,
Oct 6, 2023, 3:37:12 AM10/6/23
to ontolo...@googlegroups.com
Wow!

Fri, Oct 6, 2023 at 00:22, John F Sowa <so...@bestweb.net>:

Alex Shkotin

unread,
Oct 6, 2023, 3:49:09 AM10/6/23
to ontolo...@googlegroups.com
Stephen,

Exactly! 

Alex

Fri, Oct 6, 2023 at 01:25, Stephen Young <st...@electricmint.com>:

alex.shkotin

unread,
Oct 6, 2023, 9:02:00 AM10/6/23
to ontolog-forum
IN ADDITION: It's important that this is an open source project like this: 
"Official code repository for the paper "Language Models Represent Space and Time" by Wes Gurnee and Max Tegmark.

This repository contains all experimental infrastructure for the paper. We expect most users to just be interested in the cleaned data CSVs containing entity names and relevant metadata. These can be found in data/entity_datasets/ (with the tokenized versions for Llama and Pythia models available in the data/prompt_datasets/ folder for each prompt type).

In the coming weeks we will release a minimal version of the code to run basic probing experiments on our datasets."

Alex

Friday, October 6, 2023 at 10:49:09 UTC+3, alex.shkotin:

John F Sowa

unread,
Oct 6, 2023, 4:46:22 PM10/6/23
to ontolo...@googlegroups.com
Alex and Stephen,

As I keep saying, I enthusiastically support the potential of the LLM technology, and the methods of using them for solving important problems.  Just look at the history of science:  the greatest scientific breakthroughs are always the result of asking critical questions.  And the proposed answers are always subjected to even more difficult questions.  

Stephen Young>  We should look at this early work in Generative AI as science, rather than engineering - and science largely *is* the study of opaque artificial constructions. 

No.  There is an uncountable infinity of artificial constructions.  And the most challenging and fruitful questions usually come from the engineering side.  Very often the engineers are ahead of the scientists in coming up with temporary fixes to solve unforeseen problems.

The Google programmers (AKA engineers) faced difficult problems in machine translation.  Some clever mathematicians suggested tensor calculus.

Just look at Archimedes, whose Eureka! insight came from a problem posed by a king who wanted to know if his crown was made of solid gold.  The answer came while Archimedes was taking a bath.   And note that the fundamental question that led to the answer was proposed by an amateur -- a king, not an engineer or another scientist.

Similar examples came from the engineering questions in the developments from Copernicus to Galileo to Kepler to Newton and Leibniz.  There were far more questions and heated debates than scientific answers.

Similar issues occurred in the history of quantum mechanics from Planck to Einstein to Bohr to Schrödinger to Dirac to many, many, debates.  The most important questions came from practical problems in engineering or common sense.  Examples:  light passing through one slit or two slits.  Light coming from a heated lump of metal.   Particles of dust in a glass of water.  A train (or any object) accelerating to the speed of light.

For LLMs, the basic science came from Google's scientists who were developing better methods for machine translation.  The big applications came from thousands of hackers playing games.  They were asking questions about playing games.  They made a great advance in games.   But not so great an advance in fundamental science.

Now is the time to ask deeper questions.

John

Alex Shkotin

unread,
Oct 7, 2023, 3:34:30 AM10/7/23
to ontolo...@googlegroups.com
JFS: "Now is the time to ask deeper questions."
Exactly, and these questions should be scientific :-)
And we have a scientific phase with these creatures, GenAI in general and LLM in particular: experiments ;-)

Alex

Fri, Oct 6, 2023 at 23:46, John F Sowa <so...@bestweb.net>:

John F Sowa

unread,
Oct 7, 2023, 6:36:14 PM10/7/23
to ontolo...@googlegroups.com, CG, Peirce List
Alex,

I'm glad that we finally agree.   The main problem with the LLM gang is that they don't ask the fundamental questions:  How is this new tool related to the 60+ years of R & D in AI, computer science, and the immense area of the multiple cognitive sciences?

For example, Stanislas Dehaene and his students and colleagues have shown that there are multiple languages of thought, not just one. And every method of thinking has a different view of the world, of life, and of the fundamental methods of thought.  For example, thinking and working with and about mathematics, visual structures, music, games, gymnastics, flying an airplane, building a bridge, plowing a field, etc., etc., etc. activate totally different areas of the brain than speaking and writing English.

A brain lesion that knocks out one region may leave other regions unscathed, and it may even enhance performance in those other regions.  The LLM gang knows nothing about these issues.  They don't ask the right questions.  In fact, they're so one-sided that they don't even know what questions they should be asking.  Somebody has to educate them.  The best way to start is for us to ask the embarrassing questions.

Just before I read your note, I came across another article by the Dehaene gang:   https://www.science.org/doi/pdf/10.1126/sciadv.adf6140  
  
Does the visual word form area split in bilingual readers?   
Minye Zhan, Christophe Pallier, Aakash Agrawal, Stanislas Dehaene, Laurent Cohen
 
In expert readers, a brain region known as the visual word form area (VWFA) is highly sensitive to written words, exhibiting a posterior-to-anterior gradient of increasing sensitivity to orthographic stimuli whose statistics match those of real words. Using high-resolution 7-tesla fMRI, we ask whether, in bilingual readers, distinct cortical patches specialize for different languages. In 21 English-French bilinguals, unsmoothed 1.2-millimeter fMRI revealed that the VWFA is actually composed of several small cortical patches highly selective for reading, with a posterior-to-anterior word-similarity gradient, but with near-complete overlap between the two languages. In 10 English-Chinese bilinguals, however, while most word-specific patches exhibited similar reading specificity and word-similarity gradients for reading in Chinese and English, additional patches responded specifically to Chinese writing and, unexpectedly, to faces. Our results show that the acquisition of multiple writing systems can indeed tune the visual cortex differently in bilinguals, sometimes leading to the emergence of cortical patches specialized for a single language.

This is just one of many studies that show why LLMs based on English may be inadequate for ways of thinking in other languages or in non-linguistic or pre-linguistic ways of thinking, working, living, etc.    Furthermore, language is a left-brain activity, and most of our actions and ways of behaving and working are right-brain activities.   The current LLMs are based on ways of thinking by an English speaker whose right brain was destroyed by a stroke.

None of the writings about LLMs ask or even mention these issues.  In this mini-series on generative AI, we have to ask the embarrassing questions.   Any science that avoids such questions is brain dead.

John
 


From: "Alex Shkotin" <alex.s...@gmail.com>

Alex Shkotin

unread,
Oct 8, 2023, 3:59:50 AM10/8/23
to ontolo...@googlegroups.com, CG, Peirce List

John,


The English LLM is the flower on the tip of the iceberg. Multilingual LLMs are also being created. The Chinese certainly train more than just English-speaking LLMs. You can see the underwater structure of the iceberg, for example, here: https://huggingface.co/datasets.

Academic claims against inventors are possible. But you know the inventors: it works!


It's funny that before this hype, LLM meant Master of Laws :-)


Alex






Sun, Oct 8, 2023 at 01:36, John F Sowa <so...@bestweb.net>:

John F Sowa

unread,
Oct 8, 2023, 5:23:25 PM10/8/23
to ontolo...@googlegroups.com, CG, Peirce List
Alex,

Thanks for the list of applications of LANGUAGE-based LLMs.  It is indeed impressive.  We all agree on that.  But mathematics, physics, computer science, neuroscience, and all the branches of cognitive science have shown that natural languages are just one of an open-ended variety of left-brain ways of thinking.   LLMs haven't scratched the surface of the methods of thinking by the right brain and the cerebellum. 

The left hemisphere of the cerebral cortex has about 8 billion neurons.  The right hemisphere has another 8 billion neurons that are NOT dedicated to language.  And the cerebellum has about 69 billion neurons that are organized in patterns that are totally different from the cerebrum.   That implies that LLMs are only addressing about 10% of what is going on in the human brain (roughly 8 billion of 85 billion neurons).  There is a lot going on in that other 90%.   What kinds of processes are happening in those regions?

Science makes progress by asking QUESTIONS.  The biggest question is how can you handle the open-ended range of thinking that is not based on natural languages.  Ignoring that question is NOT scientific.  As the saying goes, when the only tool you have is a hammer, all the world is a nail.  We need more tools to handle the other 90% of the brain -- or perhaps updated and extended variations of tools that have been developed in the past 60+ years of AI and computer science.

I'll say more about these issues with more excerpts from the article I'm writing.  But I appreciate your work in showing the limitations of the current LLMs.

John 
 


From: "Alex Shkotin" <alex.s...@gmail.com>

Stephen Young

unread,
Oct 8, 2023, 7:13:30 PM10/8/23
to ontolo...@googlegroups.com, Stephen Young
John, we've known since the 50s that the right brain has a significant role in understanding language.  We also know that there is a ton of neural real estate between Wernicke's and Broca's areas that must be involved in language processing.  They're like the input and output layers of the 98-layer GPT model.  And we call them large language models, but they also "understand" vision.

Using our limited understanding of one black box to try to justify our assessment of another black box is not going to get us anywhere. 



--
Stephen Young


John F Sowa

unread,
Oct 8, 2023, 11:30:34 PM10/8/23
to ontolo...@googlegroups.com, CG
Stephen.

The six branches of the cognitive sciences (Philosophy, psychology, linguistics, AI, neuroscience, and anthropology) have an open-ended variety of unanswered questions.  That is the nature of every active branch of science.  The reason why researchers in those six sciences formed the coalition called cognitive science is that cutting-edge research in each of them has strong implications and valuable results for each of the others.  In fact, prominent leaders in AI were very active in founding the Cognitive Science Journal and conferences.

There is a huge amount of fundamental research about the multiplicity of very different "languages" of thought.  These results are well established with solid evidence about the influences.  Natural languages are valuable for communication, but they are not the best or even the most general foundation for thinking about most of the things we do in our daily lives -- or in our most complex activities.

You can't fly an airplane, drive a truck, thread a needle, paint a picture, ski down a mountain, or solve a mathematical problem if you have to talk to yourself (vocally or silently) about every detail.  You might do that when you're first learning something, but not when you master the subject.

Compared to those results, the writings by many prominent researchers on LLMs are naive.    They know how to play with LLMs, but they don't know how to solve the very serious tasks that AI researchers have been implementing and using successfully for years.  As just some examples that my colleagues and I have implemented successfully,  see https://jfsowa.com/talks/cogmem.pdf 

Look at the examples in the final section (slides 44 to 64).   The current LLM technology cannot even begin to meet the requirements that the VivoMind technology could implement in 2010.   Nobody writing about LLMs can show how to handle those requirements by using LLMs.

And those examples are just a small sample of successful applications.  Most of the others were proprietary for our customers, who did not want to have their solutions publicized.  That was fundamental science applied to mission-critical applications.  

John
 


From: "Stephen Young" <st...@electricmint.com>
Sent: 10/8/23 7:13 PM
To: ontolo...@googlegroups.com, Stephen Young <st...@electricmint.com>
Subject: Re: [ontolog-forum] Addendum to (Generative AI is at the top of the Hype Cycle. Is it about to crash?

Alex Shkotin

unread,
Oct 9, 2023, 4:55:22 AM10/9/23
to ontolo...@googlegroups.com, CG, Peirce List

John,


My limited knowledge of LLM is as follows:

- The basis of an LLM is an ANN, which works with numbers and not words. Everything that is built in it depends solely on the training data on which it is trained.

- The number of neurons in the first layer is now tens or hundreds of thousands, and this is quite enough to start processing images.

- All work from input to output and inside the ANN is done with rational numbers in the range from approximately 0 to 1.

- Transformation of text into a chain of numbers occurs by dividing it into tokens that only roughly resemble syllables.

- To give the ANN the order of the words, special tokens are introduced that approximately indicate the position of each word in the text.

- The ANN itself is absolutely deterministic, and special tricks are used to achieve non-determinism.

And so on (a small sketch below illustrates the last two points).
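[Editorial sketch] A rough illustration of tokenization and the sampling "trick", with an invented vocabulary and invented logits (no real tokenizer or model is used):

import math, random

vocab = {"the": 0, "cere": 1, "bellum": 2, "works": 3, "with": 4, "patterns": 5}

def tokenize(text):
    """Crude stand-in for sub-word tokenization: split into known pieces."""
    pieces = text.lower().replace("cerebellum", "cere bellum").split()
    return [vocab[p] for p in pieces]

print(tokenize("The cerebellum works with patterns"))   # [0, 1, 2, 3, 4, 5]

def sample(logits, temperature=1.0):
    """Softmax sampling: the logits are deterministic, the choice is not."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    weights = [math.exp(x - m) for x in scaled]
    return random.choices(range(len(logits)), weights=weights, k=1)[0]

one_forward_pass = [2.0, 0.5, 0.1, 1.2, 0.3, 0.9]   # invented logits for the 6 tokens
print(sample(one_forward_pass, temperature=0.7))     # varies from run to run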


LLM is only one of the applications of ANN and only one of the types of models, of which there are 353,584 in https://huggingface.co/models.


Looking at how the brain works, you can certainly be inspired to ask questions about ANNs and models. For example, consider the following dialogue:

“Where is cerebellum in ANN?” - “What is it doing?” - "Works with patterns!" - “So we only work with them!”


My proposal: let’s first agree that ANN is far from being only an LLM. LLM is by far the noisiest and most unexpected of ANN applications.

The question can be posed this way: we know about the Language Model, but what other models using ANN exist?


Have a look at this model https://huggingface.co/Salesforce/blip-image-captioning-large 


Alex



Mon, Oct 9, 2023 at 00:23, John F Sowa <so...@bestweb.net>:

arwesterinen

unread,
Oct 9, 2023, 2:36:44 PM10/9/23
to ontolog-forum
Dan, I agree with the position of using LLMs wherever they are appropriate, researching the areas where they need more work, supplementing them where other technologies are strong, and (in general) "not throwing the baby out with the bath water".

That is why the Fall Series sessions from mid-October to early-November are focused on use cases and demonstrations. 

No technology is perfect and each is appropriate for various uses.

I encourage readers to listen with an open mind. I do not expect LLMs to do brain surgery or handle complex emergency situations. I do expect them to be helpful, as I expect ontologies and KGs to be helpful.

Andrea

doug foxvog

unread,
Oct 9, 2023, 9:43:51 PM10/9/23
to ontolo...@googlegroups.com
On Sun, October 8, 2023 17:23, John F Sowa wrote:
> Alex,
>
> Thanks for the list of applications of LANGUAGE-based LLMs. It is indeed
> impressive. We all agree on that. But mathematics, physics, computer
> science, neuroscience, and all the branches of cognitive science have
> shown that natural languages are just one of an open-ended variety of
> left-brain ways of thinking. LLMs haven't scratched the surface of the
> methods of thinking by the right brain and the cerebellum.

> The left hemisphere of the cerebral cortex has about 8 billion neurons.
> The right hemisphere has another 8 billion neurons that are NOT dedicated
> to language. And the cerebellum has about 69 billion neurons that are
> organized in patterns that are totally different from the cerebrum. That
> implies that LLMs are only addressing 10% of what is going on in the human
> brain. There is a lot going on in that other 90%. What kinds of
> processes are happening in those regions?

Note that much of the left hemisphere has nothing to do with language.  In front of the language areas are strips for motor control of & sensory input from the right side of the body.  The frontal lobe forward of those strips does not deal with language.  The occipital lobe at the rear of the brain does not deal with language, either.  The visual cortex in the temporal lobe also does not deal with language.  This means that most of the 8 billion neurons in the cerebral cortex have nothing to do with language.

-- doug foxvog



John F Sowa

unread,
Oct 9, 2023, 11:12:47 PM10/9/23
to ontolo...@googlegroups.com, CG
Andrea, Dan, Doug, Alex,

As I keep repeating, I am enthusiastic about the ongoing research on generative AI and the LLMs that support it.  But as I also keep repeating, it's impossible to understand the full potential of any computational or reasoning method without understanding its limitations.

I explicitly address that issue for my own work.  In my first book, Conceptual Structures, the final chapter 7 had the title "Limits of Conceptualization".   Following is the opening paragraph:  "No theory is fully understood until its limitations are recognized. To avoid the presumption that conceptual mechanisms completely define the human mind, this chapter surveys aspects of the mind that lie beyond (or perhaps beneath) conceptual graphs. These are the continuous aspects of the world that cannot be adequately expressed in discrete concepts and conceptual relations."

One of the reviewers, who wrote a favorable review of the book, said that he was surprised that Chapter 7 refuted everything that went before.  But actually, it's not a refutation.  It just itemizes the many complex issues about human thought and thinking that go beyond what can be handled by conceptual graphs (and related AI methods, such as semantic networks and knowledge graphs).   Those are very important research areas, and it's essential to understand what can and cannot be done with current technology.  For a copy of that chapter, see https://jfsowa.com/pubs/cs7.pdf 

As another example, the AI Journal devoted an entire issue in 1993 to a review of a book on Cyc by Lenat & Guha.  Lenat told me that my review was the most accurate, but it was also the most frustrating because I itemized all the difficult problems that they had not yet solved.  Following is a copy of that review:  https://jfsowa.com/pubs/CycRev93.pdf 

Lenat did not hold that review against me.  In 2004, the DoD, which had invested a great deal of funding in the Cyc project, held a 20-year evaluation of the Cyc project to determine whether and how much they should continue to invest.  And Lenat recommended me as one of the members of the review committee.  Our unanimous review was that (1) Cyc had developed a great deal of important research, which should be documented and made available to the public; (2) future development of Cyc should be funded mostly by commercial applications of Cyc technology; (3) government funding should be continued during the documentation stage and during the transition to funding by applications.  Those goals were achieved, and Cyc continued to be funded by applications for another 19 years.

So when I write about the limitations of generative AI and the LLM technology, I am writing exactly what must be done in any review of any project of any kind.  A good review of any development must ALWAYS evaluate the strengths and limitations.

But many (most?  all?)  of the people who are working on LLMs don't ask questions about the limitations.  For example, I have a high regard for Geoffrey Hinton, who has been one of the most prominent pioneers in this area.  But in an interview on 60 Minutes last Sunday, he said nothing about the limitations.  He even suggested that there were no limits.   For that interview, see https://www.cbs.com/shows/video/L25QUOdr6apMNr0ZWqDBCo9uPMd_SBWM/

As a matter of fact, many of the limitations I discussed in cs7.pdf also apply to the limitations of LLMs.  In particular, they are the limitations of representing and reasoning about the continuous aspects of the world and their translations to and from a discrete, finite vocabulary of any language, natural or artificial.

Andrea>  I agree with the position of using LLMs wherever they are appropriate, researching the areas where they need more work, supplementing them where other technologies are strong, and (in general) "not throwing the baby out with the bath water".

Yes indeed.

Dan>  The ability of these systems to engage with human-authored text in ways highly sensitive to their content and intent is absolutely stunning. Encouraging members of this forum to delay putting time into learning how to use LLMs is doing them no favours. All of us love to feel we can see through hype, but it’s also a brainworm that means we’ll occasionally miss out on things whose hype is grounded in substance.

I certainly agree.  I'm not asking anybody to stop doing their R & D.  But I am asking people who promote LLMs to look at where they are running up against the limits of current versions and what can be done to go beyond those limits.

Doug F> Note that much of the left hemisphere has nothing to do with language.  In front of the language areas are strips for motor control of & sensory input from the right side of the body.  The frontal lobe forward of those strips does not deal with language.  The occipital lobe at the rear of the brain does not deal with language, either.  The visual cortex in the temporal lobe also does not deal with language.  This means that most of the 8 billion neurons in the cerebral cortex have nothing to do with language.

I agree with that point.  But I believe that the LLM proponents would also agree.  They would say that those areas of the cortex are necessary for mapping language-based LLMs to and from perception and action.  What they fail to recognize is the importance of the 90% of the neurons that do not do anything directly related to language.

Alex> My proposal: let’s first agree that ANN is far from being only an LLM. LLM is by far the noisiest and most unexpected of ANN applications.  The question can be posed this way: we know about the Language Model, but what other models using ANNs exist?

I agree that we should explore the many ways that artificial NNs relate to the NNs in various parts of the brain.   It's also important to recognize that there are many different kinds of NNs in different areas of the brain, and they are organized in ways that are very different from the currently popular ANNs.

In summary, there is a lot more research that remains to be done.  I'm not telling anybody to stop what they're doing.  I'm just recommending that they look at what more needs to be done before claiming that LLMs can do everything.

John

alex.shkotin

unread,
Oct 10, 2023, 5:44:40 AM10/10/23
to ontolog-forum

John,


I forgot to mention another interesting LLM trick: if the ANN issues only one token per call, how do we get a text response from the LLM that sometimes contains a hundred words?

Answer: They access the ANN in a loop, adding the previous answer token to the next input until they hit a completion token, such as an end-of-sentence token.
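
In rough Python, that loop might look like the sketch below. The `next_token` function and the `<eos>` completion marker are stand-ins for illustration, not any particular model's API; sampling, temperature, and batching are all omitted.

```python
# A sketch of the decoding loop described above.  `next_token` is a
# hypothetical stand-in for one call to the network (tokens so far -> next
# token); sampling, temperature, and batching are deliberately omitted.

END_OF_TEXT = "<eos>"  # assumed completion token

def generate(prompt_tokens, next_token, max_tokens=100):
    """Call the network in a loop, feeding each new token back as input."""
    tokens = list(prompt_tokens)
    for _ in range(max_tokens):
        tok = next_token(tokens)          # one network call -> one token
        if tok == END_OF_TEXT:            # stop at the completion token
            break
        tokens.append(tok)                # previous answer joins the next input
    return tokens[len(prompt_tokens):]    # return only the generated part
```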


Alex



Tuesday, October 10, 2023 at 06:12:47 UTC+3, John F Sowa:

Stefan Decker

unread,
Oct 10, 2023, 10:48:17 AM10/10/23
to ontolo...@googlegroups.com, CG, Peirce List
Hi Dan,

for what it's worth - I agree with you.
Of the limited resources I command, I am spending a million euros of institute reserves to prepare the 500-person institute I am directing for what is coming.
And the topic has reached the highest levels of government in Germany.
I assume the same is true for many other countries.
But nobody can predict the future - we can only prepare for it or shape it.
That being said - the interplay between Knowledge Graphs and language models seems to be an interesting playing field. I can only say that we are already exploring a number of these connections.

My hope is that language models will do all the work for us in making data structured, so that it can be processed at scale. As a simple example, think of doctors' reports from cancer patients that are available only as free text, with nobody having the resources to turn them into structured data so that they can be analysed.
I hope we have found something willing and able to do this - once we figure out the challenges :-)
And yes, ontologies play a role here. But not for reasoning.
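
As one way of picturing that workflow, here is a minimal Python sketch of turning a free-text report into structured data with an LLM. The `call_llm` function and the field list are assumptions for illustration, not a real clinical schema or a specific vendor API; a real pipeline would validate and curate the output before use.

```python
import json

def extract_structured(report_text, call_llm):
    """Ask an LLM to turn a free-text report into structured fields.

    `call_llm` is a hypothetical function (prompt string -> completion
    string); the field list is illustrative, not a real clinical schema.
    """
    prompt = (
        "Extract the following fields from the report and answer only "
        "with JSON: diagnosis, tumor_stage, medications.\n\n"
        "Report:\n" + report_text
    )
    raw = call_llm(prompt)
    return json.loads(raw)  # may raise; real pipelines validate before use
```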

Best regards,

Stefan

On Tue, 3 Oct 2023 at 10:23, Dan Brickley <dan...@danbri.org> wrote:
Frankly, this is just getting silly. 

The ability of these systems to engage with human-authored text in ways highly sensitive to their content and intent is absolutely stunning. Encouraging members of this forum to delay putting time into learning how to use LLMs is doing them no favours. All of us love to feel we can see through hype, but it’s also a brainworm that means we’ll occasionally miss out on things whose hype is grounded in substance. 

Kingsley Idehen

unread,
Oct 10, 2023, 5:01:02 PM10/10/23
to ontolo...@googlegroups.com

Yes, because we really need to understand what leads to confusing behavior -- even in the hands of skilled operators.

Transcript from a strange ChatGPT session.

https://netid-qa.openlinksw.com:8443/chat/?chat_id=s-48sBd4DwHJM7yxfU56bbc5bnnAran1K4qinKWrfKvNHy .

Issue:

Contradicts its own session configuration along the way.

Setup:

ChatGPT sandboxed in a Virtuoso-hosted application, courtesy of external function callbacks. The external functions are actually DBMS-hosted SQL stored procedures :)
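
For readers unfamiliar with that pattern, the sketch below shows roughly how an external function callback can be declared to the model and routed to a DBMS-hosted stored procedure. The function name, its parameters, the `get_connection` helper, and the LOOKUP_CUSTOMER procedure are all hypothetical illustrations, not the actual Virtuoso setup.

```python
# Roughly how an external function callback can be declared to the model
# and routed to a DBMS-hosted stored procedure.  The function name, its
# parameters, `get_connection`, and LOOKUP_CUSTOMER are all hypothetical.

FUNCTIONS = [{
    "name": "lookup_customer",
    "description": "Look up a customer record by id",
    "parameters": {
        "type": "object",
        "properties": {"customer_id": {"type": "integer"}},
        "required": ["customer_id"],
    },
}]

def dispatch(call, get_connection):
    """Route a model-requested function call to a SQL stored procedure."""
    if call["name"] == "lookup_customer":
        cur = get_connection().cursor()
        # callproc is the DB-API way to invoke a stored procedure
        # (support varies by driver)
        cur.callproc("LOOKUP_CUSTOMER", [call["arguments"]["customer_id"]])
        return cur.fetchall()
    raise ValueError("unknown function: " + call["name"])
```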

Kingsley Idehen

unread,
Oct 11, 2023, 8:18:08 AM10/11/23
to ontolo...@googlegroups.com, Stefan Decker, CG, Peirce List
Hi Stefan,

On 10/10/23 10:48 AM, Stefan Decker wrote:
> My hope is that language models will do all the work for us in
> making data structured, so that it can be processed at scale. As a
> simple example, think of doctors' reports from cancer patients that are
> available only as free text, with nobody having the resources to turn
> them into structured data so that they can be analysed.
> I hope we have found something willing and able to do this - once we
> figure out the challenges :-)
> And yes, ontologies play a role here. But not for reasoning.


Yes, that's the heart of the matter regarding the intersection of
LLM-based natural language processors & code generators and Knowledge
Graphs.

Circa 2023, we shouldn't be hand-crafting structured data. Instead, we
should simply be reviewing and curating what's generated :)
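
One hedged sketch of what "reviewing and curating what's generated" could look like in Python, assuming the model emits Turtle and that we already have the set of predicates our ontology defines (both assumptions for illustration; rdflib is the only real library used):

```python
from rdflib import Graph  # real library; everything else here is illustrative

def curate(generated_turtle, allowed_predicates):
    """Keep generated triples whose predicates the ontology defines;
    queue everything else for human review rather than silently accept it."""
    g = Graph()
    g.parse(data=generated_turtle, format="turtle")  # raises on bad syntax
    accepted, needs_review = Graph(), []
    for s, p, o in g:
        if p in allowed_predicates:
            accepted.add((s, p, o))
        else:
            needs_review.append((s, p, o))
    return accepted, needs_review
```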