ChatGPT Isn't 'Hallucinating' - It's Bullshitting!

Bill Totten

Jul 23, 2024, 9:02:52 PM
to GoogleGroups
ChatGPT Isn't 'Hallucinating' - It's Bullshitting!

It's important that we use accurate terminology when discussing how AI chatbots make up information.

by Joe Slater, James Humphries & Michael Townsen Hicks

https://www.scientificamerican.com (July 17 2024)


[Image: robot with a bullhorn, fingers crossed behind its back. Credit: Malte Mueller/Getty Images]

Right now artificial intelligence is everywhere. When you write a document, you'll probably be asked whether you need your "AI assistant". Open a PDF and you might be asked whether you want an AI to provide you with a summary. But if you have used ChatGPT or similar programs, you're probably familiar with a certain problem - it makes stuff up, causing people to view things it says with suspicion.

It has become common to describe these errors as "hallucinations". But talking about ChatGPT this way is misleading and potentially damaging. Instead call it bullshit.

We don't say this lightly. Among philosophers, "bullshit" has a specialist meaning, one popularized by the late American philosopher Harry Frankfurt. When someone bullshits, they're not telling the truth, but they're also not really lying. What characterizes the bullshitter, Frankfurt said, is that they just don't care whether what they say is true. ChatGPT and its peers cannot care, and they are instead, in a technical sense, bullshit machines.

We can easily see why this is true and why it matters. Last year, for example, one lawyer found himself in hot water when he used ChatGPT in his research while writing a legal brief. Unfortunately, ChatGPT had included fictitious case citations. The cases it cited simply did not exist.

This isn't rare or anomalous. To understand why, it's worth thinking a bit about how these programs work. OpenAI's ChatGPT, Google's Gemini chatbot, and Meta's Llama all work in structurally similar ways. At their core is an LLM - a large language model. These models all make predictions about language. Given some input, ChatGPT will make some prediction about what should come next or what is an appropriate response. It does so through an analysis of enormous amounts of text (its "training data"). In ChatGPT's case, the initial training data included billions of pages of text from the Internet.

From those training data, the LLM predicts what should come next after a given text fragment or prompt. It arrives at a list of the most likely words (technically, linguistic tokens) to come next, then selects one of the leading candidates. Not always choosing the single most likely word allows for more creative (and more human-sounding) language. The parameter that sets how much deviation is permitted is known as the "temperature". Later in the process, human trainers refine predictions by judging whether the outputs constitute sensible speech. Extra restrictions may also be placed on the program to avoid problems (such as ChatGPT saying racist things), but this token-by-token prediction is the idea that underlies all of this technology.
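
To make that token-by-token picture concrete, here is a minimal sketch, not from the article and not how any production model is implemented, of temperature-scaled sampling over a handful of made-up candidate scores. The vocabulary, the scores, and the function name are invented purely for illustration.

import math
import random

def sample_next_token(logits, temperature=0.8):
    """Pick one token from {token: score}, softmax-scaled by temperature."""
    # Lower temperature sharpens the distribution (more predictable text);
    # higher temperature flattens it (more varied, "creative" text).
    scaled = {tok: score / temperature for tok, score in logits.items()}
    max_s = max(scaled.values())  # subtract the max for numerical stability
    exps = {tok: math.exp(s - max_s) for tok, s in scaled.items()}
    total = sum(exps.values())
    probs = {tok: e / total for tok, e in exps.items()}
    r = random.random()
    cumulative = 0.0
    for tok, p in probs.items():
        cumulative += p
        if r <= cumulative:
            return tok
    return tok  # fall through only on floating-point rounding

# Hypothetical scores a model might assign after the prompt "The cat sat on the"
candidate_scores = {"mat": 4.0, "sofa": 2.5, "roof": 1.5, "moon": 0.2}
print(sample_next_token(candidate_scores, temperature=0.8))

At a temperature near zero this sampler almost always returns "mat"; at higher temperatures the less likely candidates appear more often, which is the "creativity" described above. Note that nothing anywhere in this procedure checks whether the chosen word is true of the world.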

Now, we can see from this description that nothing about the modeling ensures that the outputs accurately depict anything in the world. There is not much reason to think that the outputs are connected to any sort of internal representation at all. A well-trained chatbot will produce humanlike text, but nothing about the process checks that the text is true, which is why we strongly doubt an LLM really understands what it says.

So sometimes ChatGPT says false things. In recent years, as we have grown accustomed to AI, people have started to refer to these falsehoods as "AI hallucinations". While this language is metaphorical, we think it is not a good metaphor.

Consider Shakespeare's paradigmatic hallucination in which Macbeth sees a dagger floating toward him. What's going on here? Macbeth is trying to use his perceptual capacities in his normal way, but something has gone wrong. And his perceptual capacities are almost always reliable - he doesn't usually see daggers randomly floating about! Normally his vision is useful in representing the world, and it is good at this because of its connection to the world.

Now think about ChatGPT. Whenever it says anything, it is simply trying to produce humanlike text. The goal is simply to make something that sounds good. This is never directly tied to the world. When it goes wrong, it isn't because it hasn't succeeded in representing the world this time; it never tries to represent the world! Calling its falsehoods "hallucinations" doesn't capture this feature.

Instead we suggest, in a June report in Ethics and Information Technology, that a better term is "bullshit". As mentioned, a bullshitter just doesn't care whether what they say is true.

So if we do regard ChatGPT as engaging in a conversation with us - though even this might be a bit of a pretense - then it seems to fit the bill. As much as it intends to do anything, it intends to produce convincing humanlike text. It isn't trying to say things about the world. It's just bullshitting. And crucially, it's bullshitting even when it says true things!

Why does this matter? Isn't "hallucination" just a nice metaphor here? Does it really matter if it's not apt? We think it does matter for at least three reasons:

First, the terminology we use affects public understanding of technology, which is important in itself. If we use misleading terms, people are more likely to misconstrue how the technology works. We think this in itself is a bad thing.

Second, how we describe technology affects our relationship with that technology and how we think about it. And this can be harmful. Consider people who have been lulled into a false sense of security by "self-driving" cars. We worry that talking of AI "hallucinating" - a term usually used for human psychology - risks anthropomorphizing the chatbots. The ELIZA effect (named after a chatbot from the 1960s) occurs when people attribute human features to computer programs. We saw this in extremis in the case of the Google employee who came to believe that one of the company's chatbots was sentient. Describing ChatGPT as a bullshit machine (even if it's a very impressive one) helps mitigate this risk.

Third, if we attribute agency to the programs, this may shift blame away from those using ChatGPT, or its programmers, when things go wrong. If, as appears to be the case, this kind of technology will increasingly be used in important matters such as health care, it is crucial that we know who is responsible when things go wrong.

So next time you see someone describing an AI making something up as a "hallucination", call bullshit!

_____

This is an opinion and analysis article, and the views expressed by the author or authors are not necessarily those of Scientific American.


Joe Slater is a lecturer in moral and political philosophy at the University of Glasgow. More by Joe Slater: https://archive.md/o/nMUTI/https://www.scientificamerican.com/author/joe-slater/

James Humphries is a lecturer in political theory at the University of Glasgow. More by James Humphries: https://archive.md/o/nMUTI/https://www.scientificamerican.com/author/james-humphries/

Michael Townsen Hicks is a lecturer in philosophy of science and technology at the University of Glasgow. More by Michael Townsen Hicks: https://archive.md/o/nMUTI/https://www.scientificamerican.com/author/michael-townsen-hicks/


Links: The original version of this article, at the URL below, contains many links to further information not included here:

https://archive.md/nMUTI#selection-205.7-638.0

