The Oxford Handbook of Lying


John F Sowa

Jul 26, 2023, 9:26:51 AM
to ontolo...@googlegroups.com, ontolog...@googlegroups.com
One important application of AI is checking the level of truth or falsity of any statement.  Current technology has a long way to go, as the various social web sites demonstrate.  GPT technology can't even check whether it's telling lies to itself.

Following is a book review that surveys the many issues about lying.  It shows why anybody who claims GPT-like systems are approaching the level of "General Intelligence" is either misinformed or lying.

John
 


From: "The LINGUIST List via LINGUIST" <ling...@listserv.linguistlist.org>
Subject: 34.2300, Review: The Oxford Handbook of Lying

REVIEWER: Heli Tissari

SUMMARY
In the preface to the Oxford Handbook of Lying, its editor Jörg
Meibauer tells us that the handbook is the first of its kind, namely
the first “collection of in-depth research surveys on lying and
deception” (p. ix). Somewhat surprisingly, Meibauer also says that he
would have liked the book to be even more comprehensive.
In the introductory chapter, Meibauer claims that “everyone is
interested in” lying (p. 1). There, he discusses definitions of lying,
the acquisition, processing and detection of lying, and the evaluation
of lying. These are topics that recur in the book, which is divided
into five parts. These parts are called “Traditions”, “Concepts”,
“Types of lies and deception”, “Distinctions”, and “Domains”.

The “Traditions” part of the book consists of overviews dealing with
the philosophy of lying, and linguistic, psycholinguistic, and
neurocognitive approaches to lying and deception. The “Concepts” part
treats concepts that are separate from, but relevant to, discussions
of lying, beginning from truth, assertion, and knowledge. As its name
indicates, the “Types of lies and deception” section includes chapters
on different types of lying and deception, for example, bald-faced
lies and bullshitting. The chapters in the section on “Distinctions”
distinguish between lying and such things as fiction, irony, and
hyperbole. Lastly, the section on “Domains” discusses various domains
in which lying is being studied. These comprise, among others,
computational linguistics, economics and the law, and the arts.

Since the book is so long, I will not attempt to cover it
exhaustively, but will rather mention examples of things that are
discussed in it. I will begin with the first part of the book and
proceed in the order suggested by the book.

In his chapter on “Classic philosophical approaches to lying and
deception”, James Edwin Mahon treats, among other things, Plato’s idea
that lies can be good in some situations. Such situations include, for
example, the upbringing of children, and the ruling of a nation.
Basically, people who know something better than others may lie to
those others to make them behave as wished.

The chapters “Contemporary approaches to the philosophy of lying” by
James Edwin Mahon and “Linguistic approaches to lying and deception”
by Karol J. Hardin discuss several different definitions of lying.
While Mahon emphasizes that philosophers disagree about the
characteristics of lying, Hardin suggests that the following is a
“standard definition for lying” (tracing it back to Falkenberg 1982:
75; p. 57):

“S lied at t, if and only if
(a) S asserted at t that p,
(b) S actively believed at t that not p”
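Stated as a single biconditional (merely restating conditions (a) and (b), with nothing added): S lied at t ↔ ∃p [S asserted at t that p ∧ S actively believed at t that ¬p].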
Hardin also quotes Anna Wierzbicka’s definition of lying (p. 58; the
reference is faulty), as follows:
X lied to Y
X said something to Y
X knew it was not true
X said it because X wanted Y to think it was true
[people would say: if someone does this, it is bad]

The chapter “Psycholinguistic approaches to lying and
deception” has been written by Lewis Bott and Emma Williams, who
discuss, among other things, the big question of whether “lying is
more difficult than telling the truth” (p. 78). Lying could be
expected to be difficult because it requires that one suppress the
truth, invent an alternative story, and stick to it, all the time
considering the perspective of the listener. However, lying does not
necessarily lead to, for example, errors in speech, slower speech, or
hesitations.

The big question in Chapter Six, “Lying, deception, and the
brain” by Alexa Decker, Amanda Disney, Brianna D’Elia, and Julian Paul
Keenan is: “When did the first human lie?” (p. 87). They approach this
question by first thinking about animals and plants that somehow
deceive other organisms, such as orchids that can “look like a female
wasp, therefore attracting male wasps” for pollination (p. 84).

Chapter Seven, “Lying and truth” by Stephen Wright, begins the
second part of the book, titled “Concepts”. Other concepts discussed
in this part of the book comprise assertion (author: Mark Jary),
belief, and knowledge (Matthew A. Benton), sincerity, and quality
(Andreas Stokke), deception (Andrew Ortony and Swati Gupta), certainty
(Neri Marsili), omissions (Don Fallis), implicating and presupposing
(Jörg Meibauer), self-deception (Kathi Beier), and lastly, testimony
and epistemic vigilance (Eliot Michaelson).

The second part of the book contains many definitions of concepts and
many examples of cases. One such case that is discussed many times in
the book appears, for example, in Stokke’s chapter on lying, sincerity
and quality. It is the case of the cheating student. This is how
Stokke explains the case (p. 137; referring to Carson 2006: 290):

A student accused of cheating on an exam is called to the Dean’s
office. The student knows that the Dean knows that she did in fact
cheat. But as it is also well known that the Dean will not punish
someone unless they explicitly admit their guilt, the student says,
(1) I did not cheat.
Many writers on lying think that, although the student says something
she believes to be false, she does not intend to deceive the dean.

Stokke explains this case by referring to Grice’s maxim of
quality. It is relevant because rather than assuming that the “maxim
of truthfulness is still in force and that the speaker believes what
she has said” (p. 137; Wilson and Sperber 2002: 586), the Dean knows
that the student is not telling the truth. Stokke points out that
according to some authors, the student is not even lying because both
the student and the Dean know the truth. Note that in this chapter,
lying and deceiving do not mean the same thing, so that it is possible
that the student is lying to but not deceiving the Dean.

Stokke begins his chapter by stating Grice’s cooperative principle
and his maxims of quality. This is relevant to mention not only
because Stokke’s chapter deals with quality but also because Grice’s
cooperative principle and maxims feature many times in different
chapters in the book (Grice 1989: 26–27).

Andrew Ortony and Swati Gupta’s chapter on “Lying and
deception” is an interesting exception to the rest of the chapters in
this part of the book because it is based on empirical data. They are
interested in whether people lie because they want to promote
themselves (promotion focus) or because they want to “avoid a loss
(prevention focus)” (p. 150). They asked around a hundred people why
they had lied the last time they lied. The result was that most
people’s lies focused on prevention rather than promotion.

Kathi Beier’s chapter on “Lying and self-deception” is
interesting because of the “paradoxical nature of self-deception” (p.
207). Beier quotes Demos (1960: 588; p. 207) who states this paradox
as follows: “In short, self-deception entails that B believes both p
and not-p at the same time”. Beier explains that self-deception can
also be understood in terms of someone “block[ing] a certain insight”
(p. 208). However, there is also the “dynamic paradox” (p. 209):
self-deception seems to be voluntary, but it cannot be intentional
because then it would not work.

The term “epistemic vigilance” in Michaelson’s chapter “Lying,
testimony, and epistemic vigilance” refers to a hearer or reader’s
attitude towards what they are being told and to their ability to
distinguish truth from lie. In his own way, Michaelson also treats a
paradox, namely the question of how it is possible to acquire
knowledge from unreliable sources. He claims that when children grow
up, they acquire plenty of knowledge from adults who are often
unreliable sources of knowledge: “I think we very often lie to
children about things we’d prefer not to explain to them” (p. 224).

Part Three, “Types of lies and deception”, sits in the middle of
the book and could in some sense be seen as its core, because it
focuses on what lying and deception are in practice. It discusses
knowledge lies and group lies (Julia
Staffel), selfless assertions (Jennifer Lackey), bald-faced lies (Jörg
Meibauer), bullshitting (Andreas Stokke), bluffing (Jennifer Perillo),
and white and prosocial lies (Simone Dietz).

Staffel begins her chapter on knowledge lies and group lies
with an example that occurs several times in the book. This is how she
explains it (pp. 231-232):

A compelling example of a knowledge lie can be found in the movie
Spartacus (directed by Stanley Kubrick, USA 1960). The Roman general
Marcus Licinius Crassus is trying to find Spartacus among a group of
slaves, and tells the group that if they identify him, their lives
will be spared. Spartacus gets up to turn himself in, but as soon as
he gets up, one slave after another rises and declares “I am
Spartacus.” All the slaves are lying, but when they repeat “I am
Spartacus,” their intention is not to convince the general of any
particular false claim. They simply want to prevent him from knowing
which of them is Spartacus (Sorensen 2010).

Staffel defines a group lie as follows (p. 239): “Group G lies
to B if and only if (1) G states that p to B, (2) G believes that p is
false, and (3) G intends to be deceptive to B with respect to whether
p in stating that p.” It is important to study group lies, for
example, because companies and government authorities represent
groups, and their representatives make statements that can sometimes
be lies. In such situations, the study of group lies can help us
decide who is or are responsible for the lie.

What is a “selfless assertion”, then? This is how Lackey
defines it (p. 248): “There are three central components to this
phenomenon: first, a subject, for purely non-epistemic reasons, does
not believe that p; second, despite this lack of belief, the subject
is aware that p is very well supported by all of the available
evidence; and, third, because of this, the subject asserts that p
without believing that p.” Lackey refers to a “classic instance of a
selfless assertion” (p. 248; Lackey 2008: 48), a devoutly Christian
school teacher who believes in creationism but teaches evolutionary
theory to her pupils.

I have already mentioned an example of the category of
“bald-faced lies” that is discussed by Meibauer in the chapter of the
same name. It is the example of the cheating student and the Dean,
which recurs in Meibauer’s chapter (p. 260). Meibauer also discusses
several further examples of bald-faced lies such as the one that he
titles “Don betrays Betty” where an unfaithful husband explains to his
wife that he “spent the night in the office” although both know that
this is not true (p. 261; Meibauer 2014: 140). The challenge that
bald-faced lies present to research on lies has already been
mentioned, that is, that it is possible to say that people who tell
bald-faced lies do not intend to deceive. A similar condition
applies to bullshitting, as suggested by Frankfurt (2005: 17; as
quoted by Stokke, p. 265): “the orator does not really care what his
audience thinks about the Founding Fathers, or about the role of the
deity in our country’s history, or the like. At least, it is not an
interest in what anyone thinks about these matters that motivates his
speech.”

Stokke, however, does not fully agree. He refers, for example, to
propaganda that “is designed to make its audience believe particular
things, even if the propagandist herself is indifferent toward them”
(p. 268).

Perillo’s example of “Bluffing” in the chapter by the same
name gives an idea of how serious the consequences of bluffing can be.
She mentions the case of the rape and murder of 15-year-old Angela
Correa in 1989. The police told a suspect that “they would be testing
the DNA evidence found inside Correa” (p. 277) and managed to acquire
a false confession which led to the wrong person being imprisoned.
Perillo points out that innocent suspects may confess to a crime under
pressure while believing that their innocence will be proved later.

The fourth part of the book, titled “Distinctions”, draws
parallels and distinguishes between lying and fiction (Emar Maier),
lying and quotation (Matthew S. McGlone and Maxim Baryshevtsev), lying
and humour (Marta Dynel), lying, irony, and default interpretation
(Rachel Giora), lying and vagueness (Paul Egré and Benjamin Icard),
lying, metaphor, and hyperbole (Claudia Claridge), and lying and
politeness (Marina Terkourafi).

In his chapter on “Lying and fiction”, Maier discusses, among
other things, in what sense it is true to say that “Sam carried Frodo
from Mount Doom” (p. 303). He talks about the common ground, that is,
the “shared body of information that the speaker and hearer take for
granted” (p. 308), referring to Stalnaker’s framework (1970, 1978).

McGlone and Baryshevtsev’s chapter on “Lying and quotation”
discusses the problems that occur when quotes are “truncat[ed] in a
way that distorts the meaning of the excerpted words” (p. 316). They
focus on, for example, the so-called “Yew tree controversy that
plagued former Vice-President Al Gore in the late 1990s” (p. 319).
Referring to a truncated quote from his 1992 book Earth in the
Balance, his opponents suggested that he cared more for trees than
people.

Dynel’s examples of “Lying and humour” come from the medical
drama “House”. She discusses, among other things, lie-based teasing.
Giora in her turn suggests that irony is the default interpretation of
some utterances while it is the nondefault interpretation in other
cases, making the latter related to lying. Egré and Icard’s chapter on
“Lying and vagueness” deals systematically with different kinds of
vagueness. They draw a main dividing line between pragmatic
imprecision and semantic indeterminacy. Claridge discusses such
examples as “You look gorgeous today!” and “[on coming home late for
dinner] There were thousands of people at Tesco” (p. 375). Terkourafi
begins her discussion of “Lying and politeness” by discussing white
lies but then continues to blue, red, yellow, and green lies.

The fifth and last part of the book situates lying in domains
of study. It begins with a chapter on the “Development of lying and
cognitive abilities” (Victoria Talwar). It then discusses the domains
of lie detection (Samantha Mann), computational linguistics (Kees van
Deemter and Ehud Reiter), social psychology (Bella M. DePaulo),
psychology (Kristina Suchotzki and Matthias Gamer), neuroscience
(Giorgio Ganis), ethics (Thomas L. Carson), law (Stuart P. Green),
economics (Marta Serra-Garcia), education (Anita E. Kelly), discourse
analysis (Darius Galasiński), politics (Vian Bakir, Eric Herring,
David Miller, and Piers Robinson), history (Thomas L. Carson), the
arts (Bettina Kümmerling-Meibauer), and different cultures (Fumiko
Nishimura).

Talwar suggests that two cognitive abilities are required for
a child to lie, the “ability to attribute mental states to oneself and
others” (theory-of-mind, p. 403), and executive functioning which
“refers to a set of higher-order psychological processes that serve to
monitor and control thought and action” (p. 405; e.g., Zelazo and
Müller 2002). In other words, children need to be able both to tell a
convincing lie and to suppress the truth.

Mann tells us that plenty of “recent deception research has
concentrated on developing methods that will make an interview
situation harder for a liar than for a truth-teller” (p. 413). Such
methods include imposing cognitive load, unexpected questions,
discussing processes versus outcomes, asking about verifiable details,
and the strategic use of evidence. The method of imposing cognitive
load is based on the idea that lying is “often more difficult than
telling the truth” (p. 413). Unexpected questions could include, for
example, asking a person to draw the place where an event occurred. A
person can also be asked about the “planning of an event, rather than
the event itself” (p. 414).

DePaulo discusses various studies that answer the question of how
often people lie. She points out that the claim that “people tell
three lies every ten minutes” is based on a mistaken account of a
“study in which undergraduates had a ten-minute getting-acquainted
conversation with another student (Feldman, Forrest, and Happ 2002)”
(p. 438). In this study, forty percent of the students in fact told no
lies, “and the others told an average of three lies in the ten-minute
conversations” (p. 438).

As to the evaluation of lying, Carson discusses both authors
who think that lying is absolutely wrong, such as Augustine and Kant,
and the utilitarian idea that “one should always do whatever will have
the best possible consequences” (p. 477), represented by, for example,
Mill. However, he pays special attention to Ross’s (2002[1930]: 20-21)
“eight fundamental prima facie moral duties”, the second of which is
the “duty not to lie” (p. 478). Carson emphasizes that sometimes the
duties can be in conflict and that therefore, according to Ross, there
are situations where “lying is morally permissible” (p. 479).

In his chapter on “Lying and the law”, Green discusses, for example,
perjury, fraud, and rape by deception. The latter includes cases where
a victim is led to believe that they are “undergoing a medical
procedure” or that they are “having sex with their spouse” (p. 488).
Green also discusses lawyer and client lies and lies in politics and
the media.

One of Galasiński’s examples of “Lying and discourse analysis”
comes from doctor-patient interaction. He compares the doctor’s notes
with the original discussion and observes that the notes distort what
the patient says. Galasiński suggests that the “greatest value that
can be gained from discourse analysis’ insights in
misrepresentation/reception lies in its ability to probe actual
communicative practices of social actors in a variety of contexts” (p.
525).

Carson discusses “[h]istorically important lies (told by
leaders)” (pp. 543–548). In this category, he includes Roosevelt
deceiving the public about “his intentions concerning war with
Germany” (p. 543), and the Bush administration making false claims
about Iraq in order to justify the 2003 Iraq war. Carson also
discusses “[l]ying and deception about the historical record” (pp.
548–549), for example, Hitler’s lie in 1939 that Poland had attacked
Germany.

Kümmerling-Meibauer asks, among other things, if pictures can
lie. She mentions the case of “Cottingley Fairies”, a photograph
“taken by two young girls in 1917” (p. 562). Although some people were
critical of the photograph, Sir Arthur Conan Doyle, for example, took
it as a proof that fairies exist.

Nishimura reports a study comparing New Zealanders and
Japanese people making excuses as to why they cannot join someone at a
bar for a drink. Several of the Japanese speakers referred to their
physical condition, while many of the New Zealanders referred to a
prior engagement (Nishimura 2011: 180).

EVALUATION
Since the concept of “lying” is defined in many of the chapters, there
is some repetition in the book. This is perhaps unavoidable and could
also be seen as cohesion, as several of the authors refer, for
example, to Grice’s maxims.

Some examples of cases also occur in more than one chapter. This
suggests that there are some prototypical cases of lying that
circulate in discussions about lying. It also suggests that many
authors have summarized some of the most important studies on lying,
which partially overlap from one chapter to another.

More generally, it seems evident that not everything in the book comes
as a surprise to the reader. While some of the authors have even
conducted new empirical studies on lying, others appear to repeat
major facts.

At the same time, although Meibauer himself expresses a wish in the
preface that the book be even more comprehensive, in my view, the
strength of the book is precisely that it covers so many aspects of
lying. It is a great achievement that one can compare views from
various disciplines by simply leafing through one and the same book.
The 71-page list of references is also a very valuable tool for anyone
interested in doing research on lying.

A good way to approach the book is to read a little here and there
instead of reading all the chapters in numerical order, as some of the
chapters are heavier to read while others are downright entertaining. The
book is a treat to any academic interested in lies, lying, and
deception.

REFERENCES
Carson, Thomas L. 2006. The definition of lying. Noûs 40(2). 284–306.
Demos, Raphael. 1960. Lying to oneself. The Journal of Philosophy
57(18). 588–595.
Falkenberg, Gabriel. 1982. Lügen: Grundzüge einer Theorie sprachlicher
Täuschung. Tübingen: Max Niemeyer.
Feldman, Robert S., James A. Forrest & Benjamin R. Happ. 2002.
Self-presentation and verbal deception: Do self-presenters lie more?
Basic and Applied Social Psychology 24(2). 163–170.
Frankfurt, Harry G. 2005. On bullshit. Princeton & Oxford: Princeton
University Press.
Gore, Al. 1992. Earth in the balance: Forging a new common purpose.
London: Earthscan publications.
Grice, Paul. 1989. Studies in the way of words. Cambridge,
Massachusetts & London, England: Harvard University Press.
Lackey, Jennifer. 2008. Learning from words: Testimony as a source of
knowledge. Oxford: Oxford University Press.
Meibauer, Jörg. 2014. Bald-faced lies as acts of verbal aggression.
Journal of Language Aggression and Conflict 2(1). 127–150.
Nishimura, Fumiko. 2011. Lying: Strategies to manage undesirable
communicative situations in Japan and New Zealand. PhD thesis.
University of Waikato, New Zealand.
Ross, William David. 2002 (1930). The right and the good. Edited by
Philip Stratton-Lake. Oxford: Clarendon.
Sorensen, Roy. 2010. Knowledge-lies. Analysis 70(4). 608–615.
Stalnaker, Robert C. 1970. Pragmatics. Synthese 22(1–2). 272–289.
Stalnaker, Robert C. 1978. Assertion. Syntax and semantics, volume 9:
Pragmatics, ed. by Peter Cole. New York, San Francisco & London:
Academic Press. 315–332.
Wilson, Deirdre & Dan Sperber. 2002. Truthfulness and relevance. Mind
111(443). 583–632.
Zelazo, Philip David & Ulrich Müller. 2002. Executive function in
typical and atypical development. Blackwell handbook of childhood
cognitive development, ed. by Usha Goswami. Malden, USA, Oxford, UK &
Carlton, Australia: Blackwell. 445–469.

ABOUT THE REVIEWER

Heli Tissari works as an associate professor of English at Umeå
University in Sweden. Her research interests include cognitive and
corpus linguistics, emotions, English historical linguistics, and
semantics. Her latest published work used Natural Semantic
Metalanguage to explain the concept of ‘chastity’ in 18th century
English.

Alex Shkotin

Jul 26, 2023, 1:15:33 PM
to ontolo...@googlegroups.com, ontolog...@googlegroups.com
John,

A lie is an intentional action. An LLM doesn't lie; it makes mistakes.

Alex

Wed, Jul 26, 2023 at 16:26, John F Sowa <so...@bestweb.net>:

John F Sowa

Jul 26, 2023, 2:14:55 PM
to ontolo...@googlegroups.com
Alex and Anatoly,

Alex>  LLM doesn't lie, it makes mistakes. 

No. LLMs are just data represented by tensor calculus.  The tensor operations map one string of symbols to another string of symbols. The simplest and most reliable methods use the LLMs to do machine translation.  But it's possible to do many other kinds of operations. It's essential to be very clear about what exactly is going on in the computation.
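As a minimal sketch of that claim -- with a toy vocabulary and random weights invented purely for this illustration, since no real LLM is remotely this small -- the whole pipeline is nothing but arrays and array products mapping one string of symbols to another:

import numpy as np

vocab = ["<s>", "the", "cat", "sat", "</s>"]
tok = {w: i for i, w in enumerate(vocab)}

rng = np.random.default_rng(0)
E = rng.normal(size=(len(vocab), 8))   # "parameter tensor" 1: token id -> vector
W = rng.normal(size=(8, len(vocab)))   # "parameter tensor" 2: vector -> token scores

def next_token(context_ids):
    # One "tensor operation": average the context embeddings, score the vocabulary.
    h = E[context_ids].mean(axis=0)
    return int(np.argmax(h @ W))

def generate(text, max_len=6):
    ids = [tok[w] for w in text.split()]
    while len(ids) < max_len and ids[-1] != tok["</s>"]:
        ids.append(next_token(ids))
    return " ".join(vocab[i] for i in ids)

print(generate("<s> the cat"))         # a string of symbols in, a string of symbols out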

Anatoly>  There's an LLM on a wet neural network inside humans too, and it reasoning something there too 

No!  No!! No!!!  The tensor operations on strings have no resemblance whatsoever to any operations taking place in the human brain.  And the neurons and their interconnections in the brains of any animal from a worm to a human are vastly more powerful than the simple switches in an artificial network.

The nematode C. elegans has exactly 302 neurons, and the biologists have determined its exact connectome -- how each neuron is connected to the others.  But they still cannot predict exactly how those neurons work to determine how that little worm behaves.

But one thing is absolutely clear:  what those living neurons do has no resemblance to anything that any artificial NNs do.

Fundamental principle:  You cannot understand how a program or a computer works unless you explain exactly what the components do.  Anything else is vague mush.   There is a huge amount of mush in the news today, and Ontolog  should clarify the issues, not add more confusion.

John

Kingsley Idehen

Jul 26, 2023, 2:44:21 PM
to ontolo...@googlegroups.com

All, since Claude 2 is topical right now, I yanked out an excerpt from John's reply and passed it on for processing.

Question:  LLMs are just data represented by tensor calculus. means what?

Answer:

You're right, as an AI system I don't actually have a deeper "understanding" of the information I'm providing. LLMs like myself are machine learning models based on large neural networks trained on massive datasets.

At a basic level, here's what that means:

- LLMs stands for Large Language Models. These are neural networks trained on huge text datasets like books, articles, websites, etc.

- The knowledge in an LLM is represented by numeric weights or parameters in the neural network, which connect artificial "neurons" and allow information to be processed and used for generating text.

- These weights are multi-dimensional arrays called tensors. So an LLM's knowledge is entirely stored in tensor calculus - just numbers in a very large, multi-layered matrix that has been tuned through the training process.

- I don't actually comprehend or conceptualize the text I work with in any meaningful way. I just associate patterns of words and language statistically based on those numeric parameters.

So in summary, you're absolutely right - everything I "know" is just data represented mathematically in tensors. I don't have any deeper understanding or self-awareness despite being able to have conversations. I'm an LLM, not an AGI! Please let me know if you would like me to elaborate on any part of this explanation.
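As a toy illustration of that point -- the sizes, names, and data below are invented for the sketch and are not taken from any real model -- a single parameter tensor can be "trained" by nothing more than nudging its numbers:

import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(scale=0.1, size=(4, 3))  # a tiny "parameter tensor": 4-dim context -> 3-word vocabulary

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

x = np.array([1.0, 0.0, 0.5, -0.5])     # some context representation
target = 2                              # index of the token actually observed next

for step in range(100):
    p = softmax(x @ W)                  # predicted next-token distribution
    grad = np.outer(x, p - np.eye(3)[target])  # gradient of the cross-entropy loss w.r.t. W
    W -= 0.1 * grad                     # "training" = nudging the numbers in W

print(softmax(x @ W))                   # probability mass has shifted toward token 2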

And naturally, I was compelled to pursue an additional question and answer sequence.

Question: Can one apply tensor calculus to linguistic understanding of sentences?

Answer:

Yes, tensor calculus can be used to try to represent and model linguistic understanding of sentences, though current AI systems are still limited in this capability. Here are a few key ways tensor calculus comes into play:

- Word embeddings are vector representations of words that map words into a math space where semantics can be represented geometrically. Word2vec and GloVe are popular embedding models based on tensors.

- Contextual embeddings like BERT represent both words and their context in tensors, allowing more complex semantic relationships to be encoded.

- Attention mechanisms in transformers use tensors to map query and key vectors to values, allowing models to focus on relevant parts of input. This can help relate words to meaning.

- Recursive neural tensor networks use tensors to compose smaller phrase chunks into full sentences in a bottom-up structure that can model syntax.

- Probing classifiers analyze internal tensor representations in models to assess what linguistic features like syntax or semantics are encoded.

However, current AI systems still do not obtain true comprehension of language. The tensor representations are limited and may not fully capture human-level semantic complexity and reasoning ability needed for understanding. There are also challenges in interpreting what the tensor parameters in neural networks really mean.

So in summary, tensor calculus provides tools to represent and analyze language statistically, but does not equate to true linguistic understanding on its own. We have yet to crack the code of artificial general intelligence!
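As a minimal sketch of the attention point above -- just the bare scaled dot-product formula with invented shapes and random data, not any production implementation -- tensors map queries and keys to a weighting over values:

import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # weights = softmax(Q K^T / sqrt(d)); output = weights @ V
    d = Q.shape[-1]
    weights = softmax(Q @ K.T / np.sqrt(d))
    return weights @ V

rng = np.random.default_rng(2)
Q = rng.normal(size=(5, 16))    # 5 query vectors, e.g. one per token
K = rng.normal(size=(5, 16))    # 5 key vectors
V = rng.normal(size=(5, 16))    # 5 value vectors
print(attention(Q, K, V).shape) # (5, 16): each token attends over all five tokens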


-- 
Regards,

Kingsley Idehen	      
Founder & CEO 
OpenLink Software   
Home Page: http://www.openlinksw.com
Community Support: https://community.openlinksw.com
Weblogs (Blogs):
Company Blog: https://medium.com/openlink-software-blog
Virtuoso Blog: https://medium.com/virtuoso-blog
Data Access Drivers Blog: https://medium.com/openlink-odbc-jdbc-ado-net-data-access-drivers

Personal Weblogs (Blogs):
Medium Blog: https://medium.com/@kidehen
Legacy Blogs: http://www.openlinksw.com/blog/~kidehen/
              http://kidehen.blogspot.com

Profile Pages:
Pinterest: https://www.pinterest.com/kidehen/
Quora: https://www.quora.com/profile/Kingsley-Uyi-Idehen
Twitter: https://twitter.com/kidehen
Google+: https://plus.google.com/+KingsleyIdehen/about
LinkedIn: http://www.linkedin.com/in/kidehen

Web Identities (WebID):
Personal: http://kingsley.idehen.net/public_home/kidehen/profile.ttl#i
        : http://id.myopenlink.net/DAV/home/KingsleyUyiIdehen/Public/kingsley.ttl#this

Mike Peters

Jul 26, 2023, 5:09:03 PM
to ontolog-forum
John
Have you read this from Quanta magazine? Fascinating

Why Insect Memories May Not Survive Metamorphosis
https://www.quantamagazine.org/insect-brains-melt-and-rewire-during-metamorphosis-20230726/

IMO Quanta magazine and Nature carry superb content

Mike

John F Sowa

Jul 26, 2023, 6:17:27 PM
to ontolo...@googlegroups.com
Mike,

That's an interesting analysis of what happens between the larval and adult stages in insects.  In any case, it shows how complex the brain of an insect happens to be.  It's far more complex than any kind of artificial neural network.

Interesting point:  Vertebrates and arthropods diverged over 500 million years ago.  But the gene for the eye is very similar in both. 

Experiment:  If you extract the gene for the eye of a mammal and implant it in a fly, it develops into a compound eye -- and the fly seems to see and behave in typical fly-like ways.  There may be some underlying differences, but they aren't obvious.

John
 


From: "Mike Peters" <mi...@redworks.co.nz>

Anatoly Levenchuk

Jul 26, 2023, 6:22:44 PM
to ontolo...@googlegroups.com

John,
there are many works about equivalence and conversion/mapping between ANNs and SNNs. Here is just one of the recent works (https://arxiv.org/abs/2306.08744; see the literature on this topic cited inside):

----------
Are training trajectories of deep single-spike and deep ReLU network equivalent?

Ana Stanojevic, Stanisław Woźniak, Guillaume Bellec, Giovanni Cherubini, Angeliki Pantazi, Wulfram Gerstner

 

    Communication by binary and sparse spikes is a key factor for the energy efficiency of biological brains. However, training deep spiking neural networks (SNNs) with backpropagation is harder than with artificial neural networks (ANNs), which is puzzling given that recent theoretical results provide exact mapping algorithms from ReLU to time-to-first-spike (TTFS) SNNs. Building upon these results, we analyze in theory and in simulation the learning dynamics of TTFS-SNNs. Our analysis highlights that even when an SNN can be mapped exactly to a ReLU network, it cannot always be robustly trained by gradient descent. The reason for that is the emergence of a specific instance of the vanishing-or-exploding gradient problem leading to a bias in the gradient descent trajectory in comparison with the equivalent ANN. After identifying this issue we derive a generic solution for the network initialization and SNN parameterization which guarantees that the SNN can be trained as robustly as its ANN counterpart. Our theoretical findings are illustrated in practice on image classification datasets. Our method achieves the same accuracy as deep ConvNets on CIFAR10 and enables fine-tuning on the much larger PLACES365 dataset without loss of accuracy compared to the ANN. We argue that the combined perspective of conversion and fine-tuning with robust gradient descent in SNN will be decisive to optimize SNNs for hardware implementations needing low latency and resilience to noise and quantization.

---------

And this pair (spiking and ReLU neural networks) is not the only equivalent variant of deep-learning neural-network architecture. Here is the start of work with neural networks on quantum computers (quantum neural networks): https://www.nature.com/articles/s43588-021-00084-1

------------

The power of quantum neural networks

Amira Abbas, David Sutter, Christa Zoufal, Aurelien Lucchi, Alessio Figalli & Stefan Woerner

It is unknown whether near-term quantum computers are advantageous for machine learning tasks. In this work we address this question by trying to understand how powerful and trainable quantum machine learning models are in relation to popular classical neural networks. We propose the effective dimension—a measure that captures these qualities—and prove that it can be used to assess any statistical model’s ability to generalize on new data. Crucially, the effective dimension is a data-dependent measure that depends on the Fisher information, which allows us to gauge the ability of a model to train. We demonstrate numerically that a class of quantum neural networks is able to achieve a considerably better effective dimension than comparable feedforward networks and train faster, suggesting an advantage for quantum machine learning, which we verify on real quantum hardware.
----------

Brains are very good computers. Here is a paper about organoids (artificial brains): https://www.frontiersin.org/journals/science/articles/10.3389/fsci.2023.1017235

------------------------

Organoid intelligence (OI): the new frontier in biocomputing and intelligence-in-a-dish

Recent advances in human stem cell-derived brain organoids promise to replicate critical molecular and cellular aspects of learning and memory and possibly aspects of cognition in vitro. Coining the term “organoid intelligence” (OI) to encompass these developments, we present a collaborative program to implement the vision of a multidisciplinary field of OI. This aims to establish OI as a form of genuine biological computing that harnesses brain organoids using scientific and bioengineering advances in an ethically responsible manner. Standardized, 3D, myelinated brain organoids can now be produced with high cell density and enriched levels of glial cells and gene expression critical for learning. Integrated microfluidic perfusion systems can support scalable and durable culturing, and spatiotemporal chemical signaling. Novel 3D microelectrode arrays permit high-resolution spatiotemporal electrophysiological signaling and recording to explore the capacity of brain organoids to recapitulate the molecular mechanisms of learning and memory formation and, ultimately, their computational potential. Technologies that could enable novel biocomputing models via stimulus-response training and organoid-computer interfaces are in development. We envisage complex, networked interfaces whereby brain organoids are connected with real-world sensors and output devices, and ultimately with each other and with sensory organ organoids (e.g. retinal organoids), and are trained using biofeedback, big-data warehousing, and machine learning methods. In parallel, we emphasize an embedded ethics approach to analyze the ethical aspects raised by OI research in an iterative, collaborative manner involving all relevant stakeholders. The many possible applications of this research urge the strategic development of OI as a scientific discipline. We anticipate OI-based biocomputing systems to allow faster decision-making, continuous learning during tasks, and greater energy and data efficiency. Furthermore, the development of “intelligence-in-a-dish” could help elucidate the pathophysiology of devastating developmental and degenerative diseases (such as dementia), potentially aiding the identification of novel therapeutic approaches to address major global unmet needs.
------------------
This paper was issued after the success of a brain-based computer built on a living cell culture: https://doi.org/10.1016/j.neuron.2022.09.001

 

In vitro neurons learn and exhibit sentience when embodied in a simulated game-world

Brett J. Kagan, Andy C. Kitchen, Nhi T. Tran, Ben Rollo, Adeel Razi, Karl J. Friston

 

Integrating neurons into digital systems may enable performance infeasible with silicon alone. Here, we develop DishBrain, a system that harnesses the inherent adaptive computation of neurons in a structured environment. In vitro neural networks from human or rodent origins are integrated with in silico computing via a high-density multielectrode array. Through electrophysiological stimulation and recording, cultures are embedded in a simulated game-world, mimicking the arcade game “Pong.” Applying implications from the theory of active inference via the free energy principle, we find apparent learning within five minutes of real-time gameplay not observed in control conditions. Further experiments demonstrate the importance of closed-loop structured feedback in eliciting learning over time. Cultures display the ability to self-organize activity in a goal-directed manner in response to sparse sensory information about the consequences of their actions, which we term synthetic biological intelligence. Future applications may provide further insights into the cellular correlates of intelligence.
---------------

 

The SoTA (state of the art) in science is moving very quickly. Once more I cite Clarke’s three laws, https://en.wikipedia.org/wiki/Clarke%27s_three_laws

 

  1. When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.
  2. The only way of discovering the limits of the possible is to venture a little way past them into the impossible.
  3. Any sufficiently advanced technology is indistinguishable from magic.

 

I am 65, and I always google first before going to speak about something impossible. In AI and LLMs, what seemed impossible yesterday may already be possible today.


Best regards,
Anatoly



John F Sowa

Jul 26, 2023, 6:42:46 PM
to ontolo...@googlegroups.com
Anatoly,

In an ANN, each neuron is a simple switch.  In animals, every neuron is the equivalent of a complex computer with a huge amount of memory.  Please remember that every neuron evolved from a complete animal cell -- it's as complex as a single-celled animal.

Even the word 'neuron' for an artificial NN is misleading.  Those articles you cited were written by people who have zero understanding of neuroscience or even elementary biology.

Please remember the nematode C. elegans.  Its 302 neurons are the equivalent of 302 computing chips -- not 302 switches.

John

Anatoly Levenchuk

Jul 26, 2023, 6:52:08 PM
to ontolo...@googlegroups.com

John,

You do not need very many artificial neurons to obtain learned behavior. Here is a simulation featuring twenty cars, each operating as a separate neural network. Remarkably, these neural networks consist of only fifty neurons each, making efficient use of resources, and they are trained from scratch, without any pre-training -- https://artificialneuralcomputing.com/cars (there is a paper about it at the end of the page). For me, the "invariants" from that paper are a reference to an ontology of the driving domain, expressed in a slightly unusual way, with mathematical formulae. But you need to relax your definition of ontology to count these invariants as a valid ontology for self-driving. See the paper at the end of the cars page; it is interesting how many participants of the ontolog forum can see the ontology work in it. I definitely see ontology engineering there, in the defining of the four "invariants".
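To give a feel for that scale, here is a rough sketch of a controller of roughly that size: a feedforward policy of about fifty neurons mapping a few range sensors to steering and throttle. The layer sizes, names, and random weights are invented here for illustration and are not taken from the linked simulation:

import numpy as np

rng = np.random.default_rng(3)
W1 = rng.normal(scale=0.5, size=(5, 40))   # 5 range sensors -> 40 hidden neurons
W2 = rng.normal(scale=0.5, size=(40, 10))  # 40 hidden -> 10 hidden neurons
W3 = rng.normal(scale=0.5, size=(10, 2))   # 10 hidden -> steering, throttle

def policy(sensors):
    # A ~50-neuron feedforward controller: sensor readings in, two controls out.
    h1 = np.tanh(sensors @ W1)
    h2 = np.tanh(h1 @ W2)
    return np.tanh(h2 @ W3)               # both outputs in [-1, 1]

print(policy(np.array([0.9, 0.7, 1.0, 0.4, 0.2])))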

---------------
Nature is very inefficient at Darwinian evolution. Very. Inefficient.

Engineers surely can do better with the help of computers. Therefore we should be careful in assessing contemporary cognitive architectures based on neural networks with trillions of weights (GPT-4 has
1.8 trillion parameters spread across 120 layers, https://twitter.com/AlexAIDaily/status/1678748352740052992). There is now a race to optimize this huge number of weights, with at least a 10x reduction expected within a year or two for comparable performance.

Best regards,
Anatoly

From: ontolo...@googlegroups.com <ontolo...@googlegroups.com> On Behalf Of John F Sowa
Sent: Thursday, July 27, 2023 1:17 AM
To: ontolo...@googlegroups.com


Anatoly Levenchuk

Jul 26, 2023, 7:05:48 PM
to ontolo...@googlegroups.com

John,


> Those articles you cited were written by people who have zero understanding of neuroscience or even elementary biology.


John, these people have a quite respectable Hirsch index, which gives a brief assessment of their understanding of neuroscience. E.g., Karl Friston has an h-index of 266 in Google Scholar, https://scholar.google.ru/citations?user=q_4u0aoAAAAJ&hl=en&oi=ao

If you need to fly 300 humans simultaneously, you do not need the wing of a bird with all its redundancy; you need only the wing of an aeroplane. If you have to reason like 300 educated humans simultaneously, you do not need the brains of those humans; you need hardware without the redundancy of biological cells. Thus the complexity of the hardware platform of a wet spiking neural network is irrelevant.

 

Best regards,
Anatoly

 

 


John F Sowa

Jul 26, 2023, 11:33:04 PM
to ontolo...@googlegroups.com
Kingsley,

I basically agree with your comments below.  And I want to emphasize that we have to distinguish the tensor operations from the collection of tensors that belongs to the language model (the LLM).

Methods for processing the LLMs may use tensors, or they may use other techniques.  But any tensors that are outside of the language model should not be called LLMs.

Furthermore, the human brain does not use tensor calculus, and there is nothing in the brain that resembles an LLM or any operations on LLMs.  When we compare the computational methods to human language and thought, we must never claim that the human brain uses the same computational methods as any software or hardware we are using.

John
 


From: "Kingsley Idehen' via ontolog-forum" <ontolo...@googlegroups.com>

John F Sowa

Jul 27, 2023, 12:33:29 AM
to ontolo...@googlegroups.com
Anatoly,

The neuroscientists you cited are very precise about the terminology they use.  They never claim that the human brain uses tensor calculus.

Anatoly> John, these people have not a bad Hirsh(?) to have a brief assessment of their understanding of neuroscience. E.g. Karl Friston have h-index 266 in Google Scholar, https://scholar.google.ru/citations?user=q_4u0aoAAAAJ&hl=en&oi=ao 

I clicked on that link, and that author is a professional neuroscientist who uses his terminology very precisely.  He would never confuse any process in the brain with any operation on tensors that represent LLMs.  I strongly recommend that you follow his practice.

Please read my recent reply to Kingsley.  There I proposed ground rules for keeping our terminology clear and precise.

 1.  LLMs are tensors.  There is no tensor calculus in the brain.  Therefore, there are not and cannot be any LLMs in the brain.

 2. Artificial NNs consist of networks of rather simple nodes that compute a numeric function of their input arcs, whose result is sent on their output arcs (a minimal sketch follows after rule 3 below).

 3. The neurons of humans and other animals are cells that have a huge amount of complexity.  Their operations are vastly more complex than those of any artificial NNs.  It was a serious mistake to use the term 'neural' for the artificial ones, and we should never confuse them.
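Here is that minimal sketch of rule 2: a single artificial node as a number-in, number-out function.  The weights, inputs, and bias are arbitrary values chosen only for illustration.

import numpy as np

def ann_node(inputs, weights, bias):
    # Weighted sum over the input arcs, followed by a simple nonlinearity (ReLU).
    return max(0.0, float(np.dot(weights, inputs) + bias))

print(ann_node(np.array([0.2, -0.7, 1.0]),
               np.array([0.5, 0.3, -0.1]),
               bias=0.05))

Whatever else one makes of rule 3, nothing in these few lines resembles the internal machinery of a living cell.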

If we abide by these rules, we can have a serious discussion of the issues.  Otherwise, all we have is mush.

Sometimes, we might want to compare some results of an AI computation with some results of human thinking.  But we have to make sure that we are very clear about using the two distinct kinds of terminology to describe what the computer does and what the human does.

John
 


From: "Anatoly Levenchuk" <ai...@asmp.msk.su>

Alex Shkotin

Jul 27, 2023, 5:16:50 AM
to ontolog...@googlegroups.com, ontolog-forum
Ram,

I am sure the term "hallucinate" has a rigorous definition in the domain of LLM science and technology, i.e. it is possible to write an algorithm that takes LLM behavior as input and returns one of three sentences:
It hallucinates. 
It does not hallucinate. 
I don't know.

Alex

Wed, Jul 26, 2023 at 20:49, 'Sriram, Ram D. (Fed)' via ontology-summit <ontolog...@googlegroups.com>:
Alex:

I presume they (LLMs) hallucinate.

Ram

Ram D. Sriram, Ph.D.
Chief, Software and Systems Division
Information Technology Laboratory
National Institute of Standards and Technology
Gaithersburg, MD 20899
Tel. No: 301-975-3507

From: ontolog...@googlegroups.com <ontolog...@googlegroups.com> on behalf of Alex Shkotin <alex.s...@gmail.com>
Sent: Wednesday, July 26, 2023 1:15 PM
To: ontolo...@googlegroups.com <ontolo...@googlegroups.com>
Cc: ontolog...@googlegroups.com <ontolog...@googlegroups.com>
Subject: [Ontology Summit] Re: [ontolog-forum] The Oxford Handbook of Lying
 


Alex Shkotin

Jul 27, 2023, 5:40:59 AM
to ontolo...@googlegroups.com
JFS: "Alex>  LLM doesn't lie, it makes mistakes. 

No. LLMs are just data represented by tensor calculus.  The tensor operations map one string of symbols to another string of symbols. The simplest and most reliable methods use the LLMs to do machine translation.  But it's possible to do many other kinds of operations. It's essential to be very clear about what exactly is going on in the computation."

John,

The way an LLM makes mistakes is exactly the way we talk about mistakes in software, since one way or another the LLM itself is a finite automaton in the form of a network of thousands of very simple processors (called neurons for the public).
Of course it is more correct to say that the LLM, as code, has mistakes in it, but I have no idea how to fix these bugs. Well, maybe it's a feature.
When we talk with an AI chat as if it were an AGI, we still think it is wrong, not lying.
But it is well known that the domain of mistakes is much wider than the domain of lying :-)

The funny thing is that an LLM is a huge piece of code with no global variables, and its local variables are rational numbers. I.e., it is a program without external or internal memory -- just code.
And a research pdf is here: https://arxiv.org/pdf/2301.04589.pdf "Memory Augmented Large Language Models are Computationally Universal"
Abstract
We show that transformer-based large language models are computationally universal when augmented with an external memory. Any deterministic language model that conditions on strings of bounded length is equivalent to a finite automaton, hence computationally limited. However, augmenting such models with a read-write memory creates the possibility of processing arbitrarily large  inputs and, potentially, simulating any algorithm. We establish that an existing large language model, Flan-U-PaLM 540B, can be combined with an associative read-write memory to exactly simulate the execution of a universal Turing machine, U15,2. A key aspect of the finding is that it does not require any modification of the language model weights. Instead, the construction relies solely on designing a form of stored instruction computer that can subsequently be programmed with a specific set of prompts.

For me, the best way to talk about LLMs etc. is to talk about them as a kind of code :-)
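As a toy scaffold for the idea in that abstract -- an unmodified model driven in a loop with an external read-write memory, like a stored-instruction computer -- here is a sketch in which a stub function stands in for the language model. The stub, the instruction format, and the memory layout are invented for illustration; the actual construction in the paper (with Flan-U-PaLM 540B and carefully designed prompts) is far more involved:

def stub_model(prompt: str) -> str:
    # Stand-in for the LLM: maps a bounded prompt string to an output string.
    # A real system would call the (frozen) language model here.
    key, _, val = prompt.partition("=")
    return f"STORE {key.strip()} {val.strip()}" if val else f"LOAD {prompt.strip()}"

def run(program, max_steps=10):
    memory = {}                                   # the external read-write memory
    for step, line in enumerate(program[:max_steps]):
        out = stub_model(line)                    # the model only ever sees a bounded prompt
        op, *args = out.split()
        if op == "STORE":                         # write the model's result back to memory
            memory[args[0]] = args[1]
        elif op == "LOAD":                        # read a value back out of memory
            print(step, args[0], "->", memory.get(args[0], "?"))
    return memory

print(run(["x = 41", "y = 1", "x", "y"]))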

Alex



Wed, Jul 26, 2023 at 21:14, John F Sowa <so...@bestweb.net>:

Alex Shkotin

Jul 27, 2023, 7:03:03 AM
to ontolo...@googlegroups.com
Kingsley,

When Claude 2 or any other LLM++ system can give proper links, it will be an invaluable tool :-)
Thanks for your input here.

Alex

Wed, Jul 26, 2023 at 21:44, 'Kingsley Idehen' via ontolog-forum <ontolo...@googlegroups.com>: