Risks Digest 33.71

RISKS-LIST: Risks-Forum Digest Tuesday 16 May 2023 Volume 33 : Issue 71

Peter G. Neumann, founder and still moderator

***** See last item for further information, disclaimers, caveats, etc. *****
This issue is archived at <> as
The current issue can also be found at

Your DNA Can Now Be Pulled From Thin Air. Privacy Experts Are Worried
(Elizabeth Anne Brown)
An EFF Investigation: Mystery GPS Tracker On A Supporter's Car (via GG)
*Philadelphia Inquirer* hack prevents printing the Sunday paper (Sundry)
CEO of OpenAI calls for US to regulate artificial intelligence (Sam Altman)
ChatGPT Is a Blurry JPEG of the Web (The New Yorker)
Cybersecurity faces a challenge from AI's rise (MSN)
Entering the singularity: Has AI reached the point of no return? (The Hill)
Research finds AI assistants may be able to influence users without
them being aware, akin to humans swaying each other through collaboration
and social norms (WSJ)
Rip and Replace: The Tech Cold War Is Upending Wireless Carriers (NYTimes)
Vice Media Group files for bankruptcy protection (Matthew Kruk)
Re: Near collision embarrasses Navy, so they order public San Diego webcams
taken down (Steve Bacher)
Re: Three Companies Supplied Fake Comments to FCC (Steve Bacher)
Interfaces: The Dangers of Ethical AI in Healthcare (S. Scott Graham)
Abridged info on RISKS (comp.risks)


Date: Tue, 16 May 2023 04:50:48 -0700
From: geoff goodfellow <>
Subject: Your DNA Can Now Be Pulled From Thin Air. Privacy Experts Are
Worried (Elizabeth Anne Brown)

[Today's *The New York Times* Science Times, A DNA Quandary,
Elizabeth Anne Brown, 16 May 2023: Tiny bits of genetic material
that humans leave everywhere can be collected and analyzed, raising
ethical concerns about privacy and civil liberties. PGN]

David Duffy, a wildlife geneticist at the University of Florida, just wanted
a better way to track disease in sea turtles. Then he started finding human
DNA everywhere he looked.

Over the last decade, wildlife researchers have refined techniques for
recovering environmental DNA, or eDNA -- trace amounts of genetic material
that all living things leave behind. A powerful and inexpensive tool for
ecologists, eDNA is all over -- floating in the air, or lingering in water,
snow, honey and even your cup of tea. Researchers have used the method to
detect invasive species before they take over, to track vulnerable or
secretive wildlife populations and even to rediscover species thought to be
extinct. The eDNA technology is also used in wastewater surveillance systems
to monitor Covid and other pathogens.

But all along, scientists using eDNA were quietly recovering gobs and gobs
of human DNA. To them, it's pollution, a sort of human genomic by-catch
muddying their data. But what if someone set out to collect human eDNA on
purpose?

New DNA collecting techniques are like catnip for law enforcement
officials, says Erin Murphy, a law professor at the New York University
School of Law who specializes in the use of new technologies in the
criminal legal system. The police have been quick to embrace unproven
tools, like using DNA to create probability-based sketches of a suspect.

That could pose dilemmas for the preservation of privacy and civil
liberties, especially as technological advancement allows more information
to be gathered from ever smaller eDNA samples. Dr. Duffy and his colleagues
used a readily available and affordable technology to see how much
information they could glean from human DNA gathered from the environment in
a variety of circumstances, such as from outdoor waterways and the air
inside a building.

The results of their research, published Monday in the journal *Nature
Ecology & Evolution*, demonstrate that
scientists can recover medical and ancestry information from minute
fragments of human DNA lingering in the environment.

Forensic ethicists and legal scholars say the Florida team's findings
increase the urgency for comprehensive genetic privacy regulations. For
researchers, it also highlights an imbalance in rules around such techniques
in the United States -- that it's easier for law enforcement officials to
deploy a half-baked new technology than it is for scientific researchers to
get approval for studies to confirm that the system even works. Genetic
trash to genetic treasure. [...]

[Captured in a crowd, someone else's DNA might easily be associated with
you, mistakenly. It seems as if this has plenty of other risks as well.
PGN]


Date: Sun, 14 May 2023 02:51:46 -0400
From: "Gabe Goldberg" <>
Subject: An EFF Investigation: Mystery GPS Tracker On A Supporter's Car
(Electronic Frontier Foundation)

Being able to accurately determine your location anywhere on the planet is a
useful technological trick. But when tracking isn't done by you, but to you
-- without your knowledge or consent -- it's a violation of your
privacy. That's why at EFF we've long fought against dragnet surveillance,
mobile device tracking, and warrantless GPS tracking.

Several weeks ago, an EFF supporter brought her car to a mechanic, and found
a mysterious device wired into her car under her driver's seat. This
supporter, who we'll call Sarah (not her real name), sent us an email asking
if we could determine whether this device was a GPS tracker, and if so, who
might have installed it. Confronted with a mystery that could also help us
learn more about tracking, our team got to work.


Date: Tue, 16 May 2023 10:33:50 PDT
From: Peter G Neumann <>
Subject: *Philadelphia Inquirer* hack prevents printing the Sunday paper

[Thanks to Arik Hesseldahl for reporting this item.]

[Monty Solomon noted another take on this:
Possible Cyberattack Disrupts The Philadelphia Inquirer]


Date: Tue, 16 May 2023 13:40:28 -0600
From: Matthew Kruk <>
Subject: CEO of OpenAI calls for US to regulate artificial intelligence
(Sam Altman)

*The creator of advanced chatbot ChatGPT has called on US lawmakers to
regulate artificial intelligence (AI).*

Sam Altman, the CEO of OpenAI, the company behind ChatGPT, testified before
a U.S. Senate committee on Tuesday about the possibilities -- and pitfalls
-- of the new technology.


From: Amos Shapir <>
Date: Tue, 16 May 2023 17:19:11 +0300
Subject: ChatGPT Is a Blurry JPEG of the Web (The New Yorker)

How ChatGPT is like lossy compression, and a cautionary tale of a famous
old bug of Xerox copying machines.

``Think of ChatGPT as a blurry *JPEG* of all the text on the Web. It
retains much of the information on the Web, in the same way that a *JPEG*
retains much of the information of a higher-resolution image, but, if
you're looking for an exact sequence of bits, you won't find it.''

More at (may be paywalled):
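The lossy-compression analogy can be illustrated with a toy sketch in Python. Plain quantization stands in for JPEG here, and the sample data and step size are made up; the point is only that nearby values collapse to the same code, so decompression returns something close to the original but never the exact sequence.

```python
# Toy illustration of the "blurry JPEG" analogy: lossy compression keeps
# the gist of a signal but cannot reproduce the exact original values.
# Quantization is a stand-in for JPEG's transform; this is not real JPEG.

def lossy_compress(samples, step=16):
    """Quantize samples: many nearby values collapse to one code."""
    return [round(s / step) for s in samples]

def decompress(codes, step=16):
    """Reconstruct an approximation; the fine detail is gone for good."""
    return [c * step for c in codes]

original = [3, 17, 18, 100, 101, 250]
restored = decompress(lossy_compress(original))

print(restored)               # close to the original values...
print(restored == original)   # ...but the exact sequence is unrecoverable
```

As with a blurry JPEG, the round trip preserves the overall shape of the data while discarding exactly the bit-level detail you would need to reconstruct the source verbatim.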


Date: Sat, 13 May 2023 09:43:26 -0700
From: geoff goodfellow <>
Subject: Cybersecurity faces a challenge from AI's rise (MSN)

Earlier this year, a sales director in India for tech security firm Zscaler
got a call that seemed to be from the company's chief executive.

As his cellphone displayed founder Jay Chaudhry's picture, a familiar voice
said ``Hi, it's Jay. I need you to do something for me.''

A follow-up text over WhatsApp explained why. ``I think I'm having poor
network coverage as I am traveling at the moment. Is it okay to text here in
the meantime?''

Then the caller asked for assistance moving money to a bank in Singapore.
Trying to help, the salesman went to his manager, who smelled a rat and
turned the matter over to internal investigators. They determined that
scammers had reconstituted Chaudhry's voice from clips of his public remarks
in an attempt to steal from the company.

Chaudhry recounted the incident last month on the sidelines of the annual
RSA cybersecurity conference in San Francisco, where concerns about the
revolution in artificial intelligence dominated the conversation.

Criminals have been early adopters, with Zscaler citing AI as a factor in
the 47 percent surge in phishing attacks it saw last year. Crooks are
automating more personalized texts and scripted voice recordings while
dodging alarms by going through such unmonitored channels as encrypted
WhatsApp messages on personal cellphones. Translations to the target
language are getting better, and disinformation is harder to spot, security
researchers said.

That is just the beginning, experts, executives and government officials
fear, as attackers use artificial intelligence to write software that can
break into corporate networks in novel ways, change appearance and
functionality to beat detection, and smuggle data back out through
processes that appear normal.
``It is going to help rewrite code,'' National Security Agency cybersecurity
chief Rob Joyce warned the conference. ``Adversaries who put in work now
will outperform those who don't.''

The result will be more believable scams, smarter selection of insiders
positioned to make mistakes, and growth in account takeovers and phishing as
a service, where criminals hire specialists skilled at AI. [...]


Date: Tue, 16 May 2023 05:08:02 -0700
From: geoff goodfellow <>
Subject: Entering the singularity: Has AI reached the point of no return?
(The Hill)

The theory of technological singularity predicts a point in time when humans
lose control over their technological inventions and subsequent developments
due to the rise of machine consciousness and, as a result, their superior
intelligence. Reaching singularity stage, in short, constitutes artificial
intelligence's (AI's) greatest threat to humanity. Unfortunately, AI
singularity is already underway.

AI will be effective not only when machines can do what humans do
(replication), but when they can do it better and without human supervision
(adaptation). Reinforcement learning (recognized data leading to predicted
outcomes) and supervised learning algorithms (labeled data leading to
predicted outcomes) have been important to the development of robotics,
digital assistants and search engines. But the future of many industries and
scientific exploration hinges more on the development of unsupervised
learning algorithms (unlabeled data leading to improved outcomes), including
autonomous vehicles, non-invasive medical diagnosis, assisted space
construction, autonomous weapons design, facial-biometric recognition,
remote industrial production and stock market prediction. [...]

[Massive insertion of URLs pruned for RISKS, and truncated the rest of the
piece. PGN]
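The article's distinction between supervised learning (labeled data leading to predicted outcomes) and unsupervised learning (unlabeled data leading to improved outcomes) can be sketched in a few lines of Python. The data, threshold, and function names below are purely illustrative, not any real system's API.

```python
# Minimal sketch contrasting the two paradigms the article defines.

def supervised_predict(labeled, x):
    """Supervised: labeled (value, label) pairs predict a label for x
    via 1-nearest-neighbor on the value."""
    value, label = min(labeled, key=lambda pair: abs(pair[0] - x))
    return label

def unsupervised_cluster(values, split):
    """Unsupervised: no labels at all; group raw values into two
    clusters around a threshold."""
    return [0 if v < split else 1 for v in values]

labeled = [(1.0, "low"), (2.0, "low"), (9.0, "high")]
print(supervised_predict(labeled, 8.5))                        # -> high
print(unsupervised_cluster([1.2, 8.7, 2.1, 9.9], split=5.0))   # -> [0, 1, 0, 1]
```

The supervised function needs human-provided labels before it can predict anything; the unsupervised one discovers structure from the values alone, which is why the article ties it to open-ended applications like autonomous vehicles and medical diagnosis.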


Date: Sun, 14 May 2023 06:30:43 -0700
From: geoff goodfellow <>
Subject: Research finds AI assistants may be able to influence users without
them being aware, akin to humans swaying each other through collaboration
and social norms (WSJ)

*Help! My Political Beliefs Were Altered by a Chatbot!* AI assistants may be
able to change our views without our realizing it. Says one expert: ``What's
interesting here is the subtlety.''

When we ask ChatGPT or another bot to draft a memo, email, or presentation,
we think these artificial-intelligence assistants are doing our bidding. A
growing body of research shows that they also can change our thinking,
without our knowing.

One of the latest studies in this vein, from researchers spread across the
globe, found that when subjects were asked to use an AI to help them write
an essay, that AI could nudge them to write an essay either for or against a
particular view, depending on the bias of the algorithm. Performing this
exercise also measurably influenced the subjects' opinions on the topic,
after the exercise.

``You may not even know that you are being influenced,'' says Mor Naaman, a
professor in the information science department at Cornell University, and
the senior author of the paper. He calls this phenomenon ``latent
persuasion.'' These studies raise an alarming prospect: As AI makes us more
productive, it may also alter our opinions in subtle and unanticipated
ways. This influence may be more akin to the way humans sway one another
through collaboration and social norms than to the kind of mass-media and
social-media influence people are familiar with.

Researchers who have uncovered this phenomenon believe that the best defense
against this new form of psychological influence -- indeed, the only one, for
now -- is making more people aware of it. In the long run, other defenses,
such as regulators mandating transparency about how AI algorithms work, and
what human biases they mimic, may be helpful.

All of this could lead to a future in which people choose which AIs they use
-- at work and at home, in the office and in the education of their children
-- based on which human values are expressed in the responses that AI gives.

And some AIs may have different personalities -- including political
persuasions. If you're composing an email to your colleagues at the
environmental not-for-profit where you work, you might use something called,
hypothetically, ProgressiveGPT. Someone else, drafting a missive for their
conservative PAC on social media, might use, say, GOPGPT. Still others might
mix and match traits and viewpoints in their chosen AIs, which could someday
be personalized to convincingly mimic their writing style.

By extension, in the future, companies and other organizations might offer
AIs that are purpose-built, from the ground up, for different tasks.
Someone in sales might use an AI assistant tuned to be more persuasive --
call it SalesGPT. Someone in customer service might use one trained to be
extra polite -- SupportGPT.

*How AIs can change our minds*

Looking at previous research adds nuance to the story of latent persuasion.
One study from 2021 showed that the AI-powered automatic responses that
Google's Gmail suggests -- called smart replies -- which tend to be quite
positive, influence people to communicate more positively in general. A
second study found that smart replies, which are used billions of times a
day, can influence those who receive such replies to feel the sender is
warmer and more cooperative. [...]


Date: Mon, 15 May 2023 18:38:06 -0400
From: Monty Solomon <>
Subject: Rip and Replace: The Tech Cold War Is Upending Wireless Carriers

As China and the United States jockey for tech primacy, wireless carriers in
dozens of states are tearing out Chinese equipment. That has turned into a
costly, difficult process.


From: Matthew Kruk <>
Date: Mon, 15 May 2023 07:25:54 -0600
Subject: Vice Media Group files for bankruptcy protection

Vice Media Group, popular for websites such as Vice and Motherboard, filed
for bankruptcy protection on Monday to engineer its sale to a group of
lenders, capping years of financial difficulties and top-executive
departures.

Vice said that the lender consortium, which includes Fortress Investment
Group, Soros Fund Management and Monroe Capital, will provide about $225
million U.S. in the form of a credit bid for substantially all of the
company's assets and also assume significant liabilities at closing.


Date: Sun, 14 May 2023 10:30:24 -0700
From: Steve Bacher <>
Subject: Re: Near collision embarrasses Navy, so they order public San Diego
webcams taken down (Fox5)

I don't see this as a bad thing. On the one hand you have the proliferation
of livestreaming cameras impacting individual privacy (despite their
disclaimers) and national security. On the other hand, what? There's no
Constitutional right to watch and record everything going on, is there?
It's not necessarily embarrassment that's driving the Navy's decision; more
like this exposure has made them aware that military-related activities are
being broadcast to potentially everybody out there, and therefore they
needed to take action to prevent that.


Date: Sun, 14 May 2023 11:17:23 -0700
From: Steve Bacher <>
Subject: Re: Three Companies Supplied Fake Comments to FCC (NY AG)

Makes me wonder if they would look similarly into whatever John Oliver did
to encourage his Last Week Tonight audience to show their support for net
neutrality back in 2014 and 2017:

I suppose this is not nearly on the same level as what those companies did,
though.


Date: Tue, 16 May 2023 08:52:44 -0600
From: "Charles Babbage Institute" <>
Subject: Interfaces: The Dangers of Ethical AI in Healthcare
(S. Scott Graham)

Abstract: With growing awareness of the dangers of artificial intelligence,
there have been increased efforts to foster more ethical practices. This
essay explores the potential risks of so-called ``Ethical AI'' in health and
medicine through investigating a system designed to reduce racial
disparities in pain medicine. Ultimately, the essay reflects on how much of
Ethical AI preserves the broader technology sector's bias toward action and
recommends a more precautionary approach to technological intervention in
systemic health inequities. This essay is adapted from Chapter 6 of *The
Doctor and the Algorithm: Promise, Peril, and the Future of Health AI*
(Oxford University Press, 2022).

[You will have to dig for this one, which I managed to find here:
The URL received was inoperable. PGN]


Date: Mon, 1 Aug 2020 11:11:11 -0800
Subject: Abridged info on RISKS (comp.risks)

The ACM RISKS Forum is a MODERATED digest. Its Usenet manifestation is
comp.risks, the feed for which is donated by as of June 2011.
=> SUBSCRIPTIONS: The mailman Web interface can be used directly to
subscribe and unsubscribe:

=> SUBMISSIONS: to with meaningful SUBJECT: line that
includes the string `notsp'. Otherwise your message may not be read.
*** This attention-string has never changed, but might if spammers use it.
=> SPAM challenge-responses will not be honored. Instead, use an alternative
address from which you never send mail where the address becomes public!
=> The complete INFO file (submissions, default disclaimers, archive sites,
copyright policy, etc.) is online.
*** Contributors are assumed to have read the full info file for guidelines!

=> OFFICIAL ARCHIVES: takes you to Lindsay Marshall's
searchable html archive at newcastle: --> VoLume, ISsue.
Also, for the current volume/previous directories
or for previous VoLume
If none of those work for you, the most recent issue is always at, and index at /risks-33.00
ALTERNATIVE ARCHIVES: (only since mid-2001)
*** NOTE: If a cited URL fails, we do not try to update them. Try
browsing on the keywords in the subject line or cited article leads.
Apologies for what Office365 and SafeLinks may have done to URLs.
==> Special Offer to Join ACM for readers of the ACM RISKS Forum:


End of RISKS-FORUM Digest 33.71