This is the script for my national radio report yesterday on the
dangers of AI browsers like OpenAI's new "Atlas". As always, there may
have been minor wording variations from this script as I presented it
live on air.
- - -
So the issues surrounding generative artificial intelligence -- LLM,
Large Language Model AI -- are the stories that just keep on giving, so
long as what you're interested in getting is night after night of
terrible dreams keeping you awake. And it's obvious that the firms
behind so much of this often misinformation-laden, sometimes dangerous
trash just can't control themselves, because they've bet their
corporate existence on pushing this garbage into everyone and
everything, no matter how much damage it might do. And of course,
they're not interested in taking responsibility for that damage,
typically claiming that the outputs from these AI chatbots and other
LLM AI systems are First Amendment-protected free speech, and that
attempts to control them would disrupt their "Hell Bent for AI"
business models.
However, questions we likely need to be asking as a society are
whether perhaps some of these business models SHOULD be disrupted, and
whether the billionaire CEOs running these firms need to be put on
short leashes before they wreck all of our lives in the pursuit of
their often misguided dreams. I don't have time right here to get into
the technical details, but the latest nightmarish fantasy AI brew
attracting attention is OpenAI's announcement of an AI browser called
"Atlas". Now you may recall that OpenAI is the firm that is being sued
by parents claiming that the firm's ChatGPT AI chatbot helped their
teenage son commit suicide, which gives some indication of why many
people view the ethical standards of the firm and its leadership in a
negative light.
So this browser turns out, apparently, to be based on the Chromium
open source browser project, something OpenAI reportedly did not
mention when they announced the browser. Chromium, as we've discussed
in the past, is the basis for a whole array of browsers including Chrome,
Microsoft Edge and others. Anyone can take the Chromium sources and
build from them. And what OpenAI has done, according to researchers
who have been testing it, is to take the useful Chromium foundation
and turn it into an anti-privacy and security horror show.
Keeping in mind that a web browser is the gateway to the vast
majority of most people's interactions with the web, other than
specific mobile apps for specific purposes, imagine taking a web
browser and then grafting on the most problematic aspects of ChatGPT.
Imagine creating a browser where your AI overseer is watching your
personal activities on the web and even wants to take over performing
potentially dangerous actions on the web on your behalf -- in YOUR
name. That's the so-called agentic AI that is a major thrust of Big
Tech's accelerating AI hype campaign.
Well, this seems to be how researchers are describing key aspects of
OpenAI's Atlas browser. And there are reportedly all sorts of
technical vulnerabilities being found in Atlas involving prompt
injections and mishandling of malformed URLs and so on, some of which
sound like amateur-level design flaws.
So where does this leave us? Various other Big Tech AI firms are
likely working on their own versions of AI browsers that could bring
similar vulnerabilities to even more users and then leave those users
on their own when things go wrong. Of course, most people ARE free to
choose the browsers that they prefer to use, at least for their
personal use. But in the very short span of time since Atlas was
announced, so much frankly scary stuff has been discovered by
researchers about its implementation that you might want to think
long and hard before inviting that kind of technology into your web
browsing.
Because once you give this kind of AI access to your life in these
ways, you might find it difficult or even practically impossible to
ever really get that AI to completely leave you alone again. And
that's perhaps the scariest part of all.
- - -
L
- - -
--Lauren--
Lauren Weinstein
lau...@vortex.com (https://www.vortex.com/lauren)
Lauren's Blog: https://lauren.vortex.com
Mastodon: https://mastodon.laurenweinstein.org/@lauren
Signal: By request on need to know basis
Founder: Network Neutrality Squad: https://www.nnsquad.org
PRIVACY Forum: https://www.vortex.com/privacy-info
Co-Founder: People For Internet Responsibility