
"AI" Overlords Block Wad of Malicious Chinese Accounts - Probably Missed 90%


68g.1509

Feb 16, 2024, 1:38:27 AM
https://www.newsnationnow.com/business/tech/openai-microsoft-cybersecurity-malicious-accounts/?ipid=promo-link-block1

OpenAI and Microsoft Threat Intelligence shut down accounts linked to
five state-affiliated actors, some tied to China and Russia, trying to
use AI for malicious reasons, the companies announced Wednesday.

“We terminated accounts associated with state-affiliated threat actors,”
OpenAI, the creator of ChatGPT, said in an official statement. “Our
findings show our models offer only limited, incremental capabilities
for malicious cybersecurity tasks.”

The accounts terminated included China-affiliated Charcoal
Typhoon and Salmon Typhoon, Iran-affiliated Crimson Sandstorm,
North Korea-affiliated Emerald Sleet and Russia-affiliated
Forest Blizzard, according to OpenAI’s statement.

Microsoft Threat Intelligence tracks more than 300 unique
threat actors, including 160 nation-state actors, and 50
ransomware groups.

. . .

Seems the rate at which their "AI" systems are being used
for EVIL greatly exceeds the rate of "good" uses.

Recent news is that a "text to video" interface now exists:
just DESCRIBE what kind of video, "news", you want and the
AI engine will create it for you. Then you can tweak it to
fool everyone.

"Reality" seems to have dissolved. Testimony used to be
reality, then reported news, then pictures, then video.
NOW - nada. It's all way too easy to fake. Without facts
we cannot chart a course to any viable future. It all
becomes surreal and every step will be wrong.

Yesterday, Musk's AI high-tekkie said the tech should be
just SHELVED for a long time. This is a guy who makes his
money - probably lots of money - from "AI". His main
concern seemed to be military-area abuse ... but that's
only the tip of the proverbial iceberg.

D

Feb 16, 2024, 4:47:57 AM
_If_ we're heading for a post-truth world in which we drown in
AI-generated fake photos and videos, I only see one solution (apart from
training an AI to spot the fake material, if that even is possible).

Source control and signing. Every organization would have to sign its
own videos and statements, so that only whoever holds the private key
can produce a valid signature. A rough sketch of the idea follows.
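
Just to illustrate, a minimal sketch assuming the Python "cryptography"
package and a made-up file name "report.mp4" (not anyone's real
workflow, only the signing-and-verifying idea):

  from cryptography.exceptions import InvalidSignature
  from cryptography.hazmat.primitives.asymmetric.ed25519 import (
      Ed25519PrivateKey,
  )

  # The organization generates a key pair once and guards the private key.
  private_key = Ed25519PrivateKey.generate()
  public_key = private_key.public_key()

  # Before publishing, sign the exact bytes of the video file.
  video_bytes = open("report.mp4", "rb").read()
  signature = private_key.sign(video_bytes)

  # Anyone holding the published public key can check that the file
  # is byte-for-byte what the organization actually signed.
  try:
      public_key.verify(signature, video_bytes)
      print("Valid: file matches what was signed.")
  except InvalidSignature:
      print("Invalid: file was altered or is not from this source.")

The organization would publish the public key somewhere trusted and
distribute the signature alongside the video, so consumers (and news
outlets) can run the verify step themselves.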

News organizations would have to check all videos and sources they use
and become a lot more restrictive with the material they rely on.

This would probably wipe out small independent journalists (or at least
make their lives a lot harder), since few, if any, would trust them.

Am I being too negative?

Please add positive scenarios if you disagree.