
Critics Say Apple Built a 'Backdoor' Into Your iPhone With Its New Child Abuse Detection Tools


Leroy N. Soetoro

Sep 3, 2021, 2:46:37 PM

Privacy advocates worry the new features could be a slippery slope.

https://gizmodo.com/critics-say-apple-built-a-backdoor-into-your-iphone-wit-1847438624

Apple’s plans to roll out new features aimed at combating Child Sexual
Abuse Material (CSAM) on its platforms have caused no small amount of
controversy.

The company is basically trying to pioneer a solution to a problem that,
in recent years, has stymied law enforcement officials and technology
companies alike: the large, ongoing crisis of CSAM proliferation on major
internet platforms. As recently as 2018, tech firms reported the existence
of as many as 45 million photos and videos that constituted child sex
abuse material—a terrifyingly high number.

Yet while this crisis is very real, critics fear that Apple’s new
features—which involve algorithmic scanning of users’ devices and
messages—constitute a privacy violation and, more worryingly, could one
day be repurposed to search for different kinds of material other than
CSAM. Such a shift could open the door to new forms of widespread
surveillance and serve as a potential workaround for encrypted
communications—one of privacy’s last, best hopes.

To understand these concerns, we should take a quick look at the specifics
of the proposed changes. First, the company will be rolling out a new tool
to scan photos uploaded to iCloud from Apple devices in an effort to
search for signs of child sex abuse material. According to a technical
paper published by Apple, the new feature uses a “neural matching
function,” called NeuralHash, to assess whether images on a user’s iPhone
match known “hashes,” or unique digital fingerprints, of CSAM. It does
this by comparing the images shared with iCloud to a large database of
CSAM imagery that has been compiled by the National Center for Missing and
Exploited Children (NCMEC). If enough matching images are found, they are
then flagged for review by human operators, who alert NCMEC (which
presumably tips off the FBI).
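
To make the matching step concrete, here is a minimal Python sketch of the
kind of threshold-based fingerprint matching the paper describes. It is not
Apple's actual NeuralHash: the hash function, the database contents, and the
threshold below are hypothetical stand-ins, and the real design adds further
cryptographic machinery (Apple describes private set intersection and
"safety vouchers") that this toy version omits.

    # Toy sketch of threshold-based fingerprint matching, not Apple's NeuralHash.
    # The hash function, the "known" database, and the threshold are hypothetical.
    import hashlib

    def toy_fingerprint(image_bytes: bytes) -> str:
        """Placeholder fingerprint. A real perceptual hash (like NeuralHash) is
        built to survive resizing and re-encoding; a cryptographic hash is not."""
        return hashlib.sha256(image_bytes).hexdigest()

    # Hypothetical database of known fingerprints (NCMEC supplies the real hashes).
    KNOWN_FINGERPRINTS = {
        toy_fingerprint(b"known-image-1"),
        toy_fingerprint(b"known-image-2"),
    }

    MATCH_THRESHOLD = 5  # hypothetical; Apple only says enough matches are needed

    def flag_for_review(uploaded_images: list[bytes]) -> bool:
        """Return True once enough uploads match the database to warrant review."""
        matches = sum(1 for img in uploaded_images
                      if toy_fingerprint(img) in KNOWN_FINGERPRINTS)
        return matches >= MATCH_THRESHOLD

    if __name__ == "__main__":
        uploads = [b"known-image-1"] * 5 + [b"family-photo"]
        print(flag_for_review(uploads))  # True: five matches meet the toy threshold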

Some people have expressed concerns that their phones may contain pictures
of their own children in a bathtub or running naked through a sprinkler or
something like that. But, according to Apple, you don’t have to worry
about that. The company has stressed that it does not “learn anything
about images that do not match [those in] the known CSAM database”—so it’s
not just rifling through your photo albums, looking at whatever it wants.

Meanwhile, Apple will also be rolling out a new iMessage feature designed
to “warn children and their parents when [a child is] receiving or sending
sexually explicit photos.” Specifically, the feature is built to caution
children when they are about to send or receive an image that the
company’s algorithm has deemed sexually explicit. The child gets a
notification, explaining to them that they are about to look at a sexual
image and assuring them that it is OK not to look at the photo (the
incoming image remains blurred until the user consents to viewing it). If
a child under 13 breezes past that notification to send or receive the
image, a notification will subsequently be sent to the child’s parent
alerting them about the incident.
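
For illustration, here is a minimal Python sketch of the decision flow
described above. The classifier verdict, the age cutoff handling, and the
return values are hypothetical stand-ins; Apple has not published its
on-device classifier, and the real feature is built into iMessage rather
than living in application code like this.

    # Toy sketch of the iMessage decision flow described above.
    # "flagged_explicit" stands in for Apple's unpublished on-device classifier.
    from dataclasses import dataclass

    @dataclass
    class IncomingImage:
        data: bytes
        flagged_explicit: bool  # hypothetical classifier output

    def handle_incoming_image(image: IncomingImage, child_age: int,
                              child_consents: bool) -> str:
        """Return a description of what the device would do with the image."""
        if not image.flagged_explicit:
            return "show image normally"
        if not child_consents:
            return "keep image blurred; warn that it is OK not to look"
        if child_age < 13:
            return "show image and notify the parent account about the incident"
        return "show image; no parental notification for older children"

    if __name__ == "__main__":
        img = IncomingImage(data=b"example-bytes", flagged_explicit=True)
        print(handle_incoming_image(img, child_age=11, child_consents=True))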

Suffice it to say, news of both of these updates—which will arrive later
this year with the release of iOS 15 and iPadOS 15—has not been
met kindly by civil liberties advocates. The concerns may vary, but in
essence, critics worry the deployment of such powerful new technology
presents a number of privacy hazards.

In terms of the iMessage update, concerns are based around how encryption
works, the protection it is supposed to provide, and what the update does
to basically circumvent that protection. Encryption protects the contents
of a user’s message by scrambling it into unreadable ciphertext before it
is sent, so that intercepting the message in transit yields nothing
readable. However, because of the
way Apple’s new feature is set up, communications with child accounts will
be scanned to look for sexually explicit material before a message is
encrypted. Again, this doesn’t mean that Apple has free rein to read a
child’s text messages—it’s just looking for what its algorithm considers
to be inappropriate images.
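
To see why critics call this a workaround for end-to-end encryption, note
the order of operations: the check runs on the plaintext image, on the
device, before any encryption happens. A minimal Python sketch follows;
both helper functions are placeholders, not Apple's code.

    # Toy illustration of client-side scanning: inspection precedes encryption.
    import base64

    def classifier_flags_explicit(image_bytes: bytes) -> bool:
        """Placeholder for an on-device ML classifier's verdict."""
        return b"explicit" in image_bytes  # toy heuristic, for illustration only

    def encrypt_for_transport(plaintext: bytes) -> bytes:
        """Placeholder for end-to-end encryption of the outgoing message."""
        return base64.b64encode(plaintext)  # NOT real encryption; stands in for it

    def send_image(image_bytes: bytes) -> bytes:
        # 1. The scan happens here, on the device, while the content is readable.
        if classifier_flags_explicit(image_bytes):
            pass  # trigger the warning / parental-notification flow described above
        # 2. Only after the plaintext has been inspected does encryption occur.
        return encrypt_for_transport(image_bytes)

    if __name__ == "__main__":
        print(send_image(b"some-photo-bytes"))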

However, the precedent set by such a shift is potentially worrying. In a
statement published Thursday, the Center for Democracy and Technology took
aim at the iMessage update, calling it an erosion of the privacy provided
by Apple’s end-to-end encryption: “The mechanism that will enable Apple to
scan images in iMessages is not an alternative to a backdoor—it is a
backdoor,” the Center said. “Client-side scanning on one ‘end’ of the
communication breaks the security of the transmission, and informing a
third-party (the parent) about the content of the communication undermines
its privacy.”

The plan to scan iCloud uploads has similarly riled privacy advocates.
Jennifer Granick, surveillance and cybersecurity counsel for the ACLU’s
Speech, Privacy, and Technology Project, told Gizmodo via email that she
is concerned about the potential implications of the photo scans: “However
altruistic its motives, Apple has built an infrastructure that could be
subverted for widespread surveillance of the conversations and information
we keep on our phones,” she said. “The CSAM scanning capability could be
repurposed for censorship or for identification and reporting of content
that is not illegal depending on what hashes the company decides to, or is
forced to, include in the matching database. For this and other reasons,
it is also susceptible to abuse by autocrats abroad, by overzealous
government officials at home, or even by the company itself.”

Even Edward Snowden chimed in:


Edward Snowden
@Snowden
No matter how well-intentioned, @Apple is rolling out mass surveillance to
the entire world with this. Make no mistake: if they can scan for kiddie
porn today, they can scan for anything tomorrow.

They turned a trillion dollars of devices into iNarcs—*without asking.*
Edward Snowden
@Snowden
Apple says to "protect children," they're updating every iPhone to
continuously compare your photos and cloud storage against a secret
blacklist. If it finds a hit, they call the cops.

iOS will also tell your parents if you view a nude in iMessage.

https://eff.org/deeplinks/2021/08/apples-plan-think-different-about-encryption-opens-backdoor-your-private-life
7:23 PM · Aug 5, 2021

The concern here obviously isn’t Apple’s mission to fight CSAM, it’s the
tools that it’s using to do so—which critics fear represent a slippery
slope. In an article published Thursday, the privacy-focused Electronic
Frontier Foundation noted that scanning capabilities similar to Apple’s
tools could eventually be repurposed to make its algorithms hunt for other
kinds of images or text—which would basically mean a workaround for
encrypted communications, one designed to police private interactions and
personal content. According to the EFF:

All it would take to widen the narrow backdoor that Apple is building is
an expansion of the machine learning parameters to look for additional
types of content, or a tweak of the configuration flags to scan, not just
children’s, but anyone’s accounts. That’s not a slippery slope; that’s a
fully built system just waiting for external pressure to make the
slightest change.

Such concerns become especially germane when it comes to the features’
rollout in other countries—with some critics warning that Apple’s tools
could be abused and subverted by corrupt foreign governments. In response
to these concerns, Apple confirmed to MacRumors on Friday that it plans to
expand the features on a country-by-country basis. When it does consider
distribution in a given country, it will do a legal evaluation beforehand,
the outlet reported.

In a phone call with Gizmodo Friday, India McKinney, director of federal
affairs for EFF, raised another concern: the fact that both tools are
unauditable means that it’s impossible to independently verify that they
are working the way they’re supposed to.

“There is no way for outside groups like ours or anybody
else—researchers—to look under the hood to see how well it’s working, is
it accurate, is this doing what it’s supposed to be doing, how many false
positives are there,” she said. “Once they roll this system out and start
pushing it onto the phones, who’s to say they’re not going to respond to
government pressure to start including other things—terrorism content,
memes that depict political leaders in unflattering ways, all sorts of
other stuff.” Relevantly, in its article on Thursday, EFF noted that one
of the technologies “originally built to scan and hash child sexual abuse
imagery” was recently retooled to create a database run by the Global
Internet Forum to Counter Terrorism (GIFCT), which now helps online
platforms search for and moderate or ban “terrorist” content centered on
violence and extremism.

Because of all these concerns, a cadre of privacy advocates and security
experts has written an open letter to Apple, asking that the company
reconsider its new features. As of Sunday, the letter had over 5,000
signatures.

However, it’s unclear whether any of this will have an impact on the tech
giant’s plans. In an internal company memo leaked Friday, Apple’s software
VP Sebastien Marineau-Mes acknowledged that “some people have
misunderstandings and more than a few are worried about the implications”
of the new rollout, but said that the company will “continue to explain and
detail the features so people understand what we’ve built.” Meanwhile,
NCMEC sent an internal letter to Apple staff in which it referred to
the program’s critics as “the screeching voices of the minority” and
praised Apple for its efforts.



--
"LOCKDOWN", left-wing COVID fearmongering. 95% of COVID infections
recover with no after effects.

No collusion - Special Counsel Robert Swan Mueller III, March 2019.
Officially made Nancy Pelosi a two-time impeachment loser.

Donald J. Trump, cheated out of a second term by fraudulent "mail-in"
ballots. Report voter fraud: sf.n...@mail.house.gov

Thank you for cleaning up the disaster of the 2008-2017 Obama / Biden
fiasco, President Trump.

Under Barack Obama's leadership, the United States of America became The
World According To Garp. Obama sold out heterosexuals for Hollywood
queer liberal democrat donors.

President Trump boosted the economy, reduced illegal invasions, appointed
dozens of judges and three SCOTUS justices.

Jolly Roger

Sep 3, 2021, 5:43:24 PM

On 2021-09-03, Leroy N. Soetoro <democrat-...@mail.house.gov> wrote:
> Privacy advocates worry the new features could be a slippery slope.

Yet another Arlen nym. Ho hum.

--
E-mail sent to this address may be devoured by my ravenous SPAM filter.
I often ignore posts from Google. Use a real news client instead.

JR

Robin Goodfellow

Sep 3, 2021, 11:43:33 PM

Jolly Roger <jolly...@pobox.com> asked
> Yet another Arlen nym. Ho hum.

I find it revealing that this moron Jolly Roger has no idea who the OP is,
given the Wikipedia articles explaining how anonymous remailers work
(but these Apple apologist morons like Jolly Roger don't know how to read).

BTW, on the topic of CSAM... pretty much Apple is openly saying:
"*We normally fool everyone; but not this time*."

"So let's pause so we can work on how to fool them better."

https://www.macrumors.com/2021/08/13/federighi-confusion-around-child-safety-details/
https://www.theverge.com/2021/8/13/22623336/craig-federighi-apple-icloud-photos-iphone-ipad-csam-scanning-auditability
--
The apologists always accuse everyone else of being what _they_ are!

Jolly Roger

Sep 4, 2021, 11:49:05 AM

On 2021-09-04, Robin Goodfellow <Ancient...@Heaven.Net> wrote:
> Jolly Roger <jolly...@pobox.com> asked
>> Yet another Arlen nym. Ho hum.
>
> I find it revealing that this moron

Arlen got #Triggered : )