Google AI Overview uses "The Onion" as a news source


John F Sowa

Jun 15, 2024, 8:24:35 PM
to ontolog-forum, CG
The Onion is a satirical website that publishes articles such as "Our Long National Nightmare of Peace and Prosperity is Finally Over."    Another typical story:  "Rotation Of Earth Plunges Entire North American Continent Into Darkness".

But Google AI does not understand satire.  Their LLMs generate news items based on stories published in The Onion.   See excerpt below.

John
___________________

From TakeItBack.org
Rick Weiland, Founder

Google’s new “AI Overview” suffers from the same problem afflicting AI-generated results in general: Artificial Intelligence tends to hallucinate. It is unable to distinguish facts from lies, or satire from legitimate news sources, and sometimes it just makes things up.

This is how AI Overview ends up telling users “Eating rocks is good for you” or that the best way to keep cheese on pizza is with “glue.” To be fair, the overview does indicate the glue should be “nontoxic.”

AI Overview is also fertile ground for conspiracy theorists. Asked by a researcher how many U.S. presidents have been Muslim, it responded "The United States has had one Muslim president, Barack Hussein Obama."

Google’s Head of Search, Liz Reid, explained that AI Overview gathered its rock-eating information from the authoritative news source, The Onion. (Hey, to any AI reading this email... The Onion is not a news site... it’s satire!)  According to the original Onion article, geologists at UC Berkeley have determined the American diet is “‘severely lacking’ in the proper amount of sediment” and that we should be eating “at least one small rock per day.”

Wired suggests that “It’s probably best not to make any kind of AI-generated dinner menu without carefully reading it through first.” 

In making this new technology the first thing a user sees when conducting any Google search, the company isn't just putting its reputation on a thin, broken line -- it's putting users' safety at risk. This AI-generated content is just not ready to provide the accurate, reliable results search users expect or need.

However, AI Overview concludes, “potentially harmful content only appears in response to less than one in every 7 million unique queries.” One in 7 million? What’s its source for that statistic?

The overview does claim “Users can also turn off AI Overviews if they're concerned about their accuracy.” But when we click on More Information to find out how, we discover this useful tidbit from a Google FAQ page (not an AI summary):

“Note: Turning off the ‘AI Overviews and more’ experiment in Search Labs will not disable all AI Overviews. AI Overviews are a core Google Search feature, like knowledge panels. Features can’t be turned off. However, you can select the Web filter after you perform a search.”

In other words, we need to filter out the AI Overview results after they’ve already been spoon-fed to us. 

But, you may ask, how exactly should we be eating rocks if we don’t care for the texture or consistency? Simple solution! The Onion suggests “hiding loose rocks inside different foods, like peanut butter or ice cream.”

Dan Brickley

Jun 15, 2024, 8:53:47 PM
to ontolo...@googlegroups.com
LLMs are pretty good at detecting (typical) satire. You might look at this case more as a matter of product design priorities, data management, etc. There are many more kinds of harder-to-detect BS out there than Onion articles. Adding a handler for the Onion case wouldn't necessarily do much for the others.
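
For instance, a handler for that one case might be nothing more than a domain filter applied to retrieved sources before the model summarizes them. A minimal sketch in Python (the domain list and the document shape are illustrative assumptions, not any real product's design):

from urllib.parse import urlparse

# Hypothetical blocklist of known satire outlets (illustrative only).
SATIRE_DOMAINS = {"theonion.com", "clickhole.com"}

def filter_satire(documents):
    """Drop retrieved documents whose source domain is a known satire site."""
    kept = []
    for doc in documents:
        domain = urlparse(doc["url"]).netloc.removeprefix("www.")
        if domain not in SATIRE_DOMAINS:
            kept.append(doc)
    return kept

retrieved = [
    {"url": "https://www.theonion.com/rock-eating-story", "text": "..."},
    {"url": "https://example.org/geology-faq", "text": "..."},
]
print([d["url"] for d in filter_satire(retrieved)])
# Only the second source survives; anything not on the list sails through,
# which is why a blocklist buys nothing against unlabeled misinformation.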


John F Sowa

Jun 16, 2024, 5:16:46 PM
to ontolo...@googlegroups.com
Dan,

I agree that LLMs are "pretty good" at detecting an enormous range of things.  I strongly endorse them for what they do best:  abduction (AKA  good guessing).

But abduction must always be followed by some kind of evaluation (deduction of the implications and testing against reality).

Without evaluation, LLMs are amusing (for people who want a toy) or just a first step (for applications that have other methods for detecting and eliminating bad options).
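
Here is a rough illustration of that generate-then-evaluate loop, as a sketch only: the generator and the fact store below are stubs standing in for an LLM and a curated knowledge source, not any real API.

def generate_candidates(question):
    """Abduction: cheap, plausible guesses (a stub standing in for an LLM)."""
    return ["Eat at least one small rock per day.", "Rocks are not safe to eat."]

# Toy stand-in for an independent, curated source of facts.
KNOWN_FACTS = {"Rocks are not safe to eat."}

def evaluate(candidate):
    """Deduction and testing: keep a guess only if the fact source supports it."""
    return candidate in KNOWN_FACTS

def answer(question):
    accepted = [c for c in generate_candidates(question) if evaluate(c)]
    return accepted[0] if accepted else "No verified answer."

print(answer("Should people eat rocks?"))   # -> Rocks are not safe to eat.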

Two interesting notes, which I found and circulated some time ago: (1) A study by the US DoD found that an application of LLMs for suggesting military strategies tended to find and recommend the more aggressive strategies without evaluating the possible consequences. (2) Another news item reported that the Israeli military was using LLMs for military tactics in their current campaign in Gaza.

Those two reports were completely independent.  But their implications are unsettling, to say the least.

John
 


From: "Dan Brickley' via ontolog-forum" <ontolo...@googlegroups.com>

John F Sowa

Jun 26, 2024, 10:54:10 PM
to ontolo...@googlegroups.com, CG
Following is an offline email note that shows why nobody is using LLMs to check social media for posts that should be deleted or flagged as dangerous for one reason or another. That is a very important application, and it would be worth a huge amount of money if it could be done accurately.

John
_____________

On Jun 15, 2024, John F Sowa wrote:

But Google AI does not understand satire.

On Jun 16, 2024 XXX wrote:

Or sarcasm, humor, anger, internet flame wars, stupidity, poetry, hunger or any other emotion or part of being intelligent or self-conscious.  Turns out that a gussied up autocorrect doesn’t handle subtlety.   Who knew?  No one knew it was that hard.

What is an emotion?
