
Apple's iCloud Photos and Messages child safety initiatives - AppleInsider

Jolly Roger

Aug 6, 2021, 5:11:00 PM
What you need to know: Apple's iCloud Photos and Messages child safety
initiatives

<https://appleinsider.com/articles/21/08/06/what-you-need-to-know-apples-icloud-photos-and-messages-child-safety-initiatives>

Apple's recent child safety announcement about iCloud Photos image
assessment and Messages notifications has generated a lot of hot takes,
but many of the arguments are missing context, historical information —
and the fact that Apple's privacy policies aren't changing.

Apple announced a new suite of tools on Thursday, meant to help protect
children online and curb the spread of child sexual abuse material
(CSAM). It included features in iMessage, Siri and Search, and a
mechanism that scans iCloud Photos for known CSAM images.

Cybersecurity, online safety, and privacy experts had mixed reactions to
the announcement. Users did, too. However, many of the arguments are
clearly ignoring how widespread the practice of scanning image databases
for CSAM actually is. They also ignore the fact that Apple is not
abandoning its privacy-protecting technologies.

Here's what you should know.

Apple's privacy features

The company's slate of child protection features includes the
aforementioned iCloud Photos scanning, as well as updated tools and
resources in Siri and Search. It also includes a feature meant to flag
inappropriate images sent via iMessage to or from minors.

As Apple noted in its announcement, all of the features were built with
privacy in mind. Both the iMessage feature and the iCloud Photo
scanning, for example, use on-device intelligence.

Additionally, the iCloud Photos "scanner" isn't actually scanning or
analyzing the images on a user's iPhone. Instead, it compares the
mathematical hashes of known CSAM against hashes of the images stored
in iCloud Photos. If a collection of known CSAM images is found in an
account, the account is flagged and a report is sent to the National
Center for Missing & Exploited Children (NCMEC).

There are elements of the system designed to ensure that false
positives are ridiculously rare. Apple says that the chances of a
false positive are one in a trillion. That's largely because of the
collection "threshold": an account is only flagged once a certain
number of known images match, a number Apple has declined to detail.

Additionally, the scanning only works on iCloud Photos. Images stored
strictly on-device are not scanned, and they can't be examined if iCloud
Photos is turned off.

The Messages system is even more privacy-preserving. It only applies to
accounts belonging to children, and is opt-in, not opt-out. Furthermore,
it doesn't generate any reports to external entities — only the parents
of the children are notified that an inappropriate message has been
received or sent.

There are critical distinctions to be made between the iCloud Photos
scanning and the iMessage feature. Basically, the two are completely
unrelated beyond the fact that they're both meant to protect children.

* iCloud Photos assessment hashes images without examining them for
content or context, compares those hashes against the known CSAM
collection, and generates a report for the hash database's maintainer,
the NCMEC, only when a matching collection is found.

* The child account iMessage feature uses on-device machine learning,
does not compare images to CSAM databases, and doesn't send any
reports to Apple; notifications go only to the parental Family
Sharing manager account.

With this in mind, let's run through some potential scenarios that users
are concerned about.

iCloud library false positives - or "What if I'm falsely mathematically
matched?"

Apple's system is designed to ensure that false positives are
ridiculously rare. But, for whatever reason, humans are bad at
risk-reward assessment. As we've already noted, Apple says the odds of
a false match are one in a trillion. One in 1,000,000,000,000.

In a human's lifetime, there is a 1-in-15,000 chance of getting hit by
lightning, and that isn't a general fear. The odds of getting struck
by a meteor over a lifetime are about 1-in-250,000, and that isn't a
daily fear either. The odds of winning any given mega-jackpot lottery
are worse than 1-in-150,000,000, yet millions of dollars get spent on
tickets every day across the US.

Apple's one-in-a-trillion is not what you'd call a big chance of
somebody's life getting ruined by a false positive. And, Apple is not
the judge, jury, and executioner, as some believe.

Should matching material be spotted, it is reviewed by a human before
anything leaves Apple. Only then is the potential match handed over to
law enforcement.

It is then up to law enforcement to obtain a warrant, as courts do not
accept a hash match alone as sufficient evidence.

A false unlock of your iPhone with Face ID, at about 1-in-1,000,000,
is far more probable than a SHA-256 collision. A bad Touch ID match,
at about 1-in-50,000, is more probable still, and again, neither of
these is a big concern for users.
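To put those ratios side by side, here is a quick back-of-the-envelope
comparison using only the figures quoted in this article:

    // Back-of-the-envelope comparison of the odds quoted in this article.
    let falsePositiveOdds  = 1.0 / 1_000_000_000_000  // Apple's stated one in a trillion
    let faceIDFalseAccept  = 1.0 / 1_000_000
    let touchIDFalseAccept = 1.0 / 50_000
    let lightningLifetime  = 1.0 / 15_000

    print(faceIDFalseAccept / falsePositiveOdds)   // ~1,000,000x more likely than a false flag
    print(touchIDFalseAccept / falsePositiveOdds)  // ~20,000,000x more likely
    print(lightningLifetime / falsePositiveOdds)   // ~66,700,000x more likely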

In short, your pictures of your kids in the bath, or your own nudes in
iCloud are profoundly unlikely to generate a false match.

Messages monitoring - or "If I send a nude to my partner, will I get
flagged?"

If you're an adult, you won't be flagged if you're sending nude images
to a partner. Apple's iMessage mechanism only applies to accounts
belonging to underage children, and is opt-in.

Even if it did flag an image sent between two consenting adults, it
wouldn't mean much. The system, as mentioned earlier, doesn't notify or
generate an external report. Instead, it uses on-device machine learning
to detect a potentially sensitive photo and warn the user about it.

And, again, it's only meant to warn parents when underage users send
or receive such pictures.
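As a minimal sketch of that flow, assuming invented names for the
Apple-internal pieces (the real classifier and Family Sharing plumbing
are not public):

    // Hypothetical sketch of the Messages flow described above. Every type
    // and function name here is invented for illustration; the real
    // on-device classifier and notification path belong to Apple.

    struct MessagesAccount {
        let isChildAccount: Bool          // the feature only applies to child accounts
        let featureEnabledByParent: Bool  // and only when a parent opts in
    }

    enum ImageHandling {
        case deliverNormally          // adults and non-enrolled children: no change
        case blurWarnAndNotifyParent  // enrolled child: warn on-device, alert the parent
    }

    func handleImage(classifierSaysSensitive: Bool,
                     account: MessagesAccount) -> ImageHandling {
        guard account.isChildAccount,
              account.featureEnabledByParent,
              classifierSaysSensitive else {
            return .deliverNormally
        }
        // Nothing is reported to Apple or any external entity in either branch.
        return .blurWarnAndNotifyParent
    }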

What about pictures of my kids in the bath?

This concern stems from a misunderstanding of how Apple's iCloud Photos
scanning works. It's not looking at individual images contextually to
determine if a picture is CSAM. It's only comparing the image hashes of
photos stored in iCloud against a database of known CSAM maintained by
NCMEC.

In other words, unless the images of your kids somehow end up hashed
in NCMEC's database, they won't be affected by the iCloud Photos
scanning system. More than that, the system is only designed to catch
collections of known CSAM. So even if a single image somehow managed
to generate a false positive (an unlikely event, as we've already
discussed), the system still wouldn't flag your account.

What if somebody sends me illegal material?

If somebody were really out to get you, it's more likely that they
would break into your iCloud account because your password is "1234"
or some other nonsense, and dump the material there. Even then, if the
upload demonstrably didn't come from your own devices or network, that
is a defensible position.

Again, the iCloud Photos system only applies to collections of images
stored on Apple's cloud photo system. If someone sent you known CSAM via
iMessage, then the system wouldn't apply unless it was somehow saved to
your iCloud.

Could someone hack into your account and upload a bunch of known CSAM to
your iCloud Photos? This is technically possible, but also highly
unlikely if you're using good security practices. With a strong password
and two-factor authentication, you'll be fine. If you've somehow managed
to make enemies of a person or organization able to get past those
security mechanisms, you likely have far greater concerns to worry
about.

Also, Apple's system allows people to appeal if their accounts are
flagged and closed for CSAM. If a nefarious entity did manage to get
CSAM onto your iCloud account, there's always the possibility that you
can set things straight.

And, it would still have to be moved to your Photo library for it to get
scanned.

Valid privacy concerns

Even with those scenarios out of the way, many tech libertarians have
taken issue with the Messages and iCloud Photo scanning mechanisms
despite Apple's assurances of privacy. A lot of cryptography and
security experts also have concerns.

For example, some experts worry that having this type of mechanism in
place could make it easier for authoritarian governments to abuse it.
Although Apple's system is only designed to detect CSAM, experts worry
that oppressive governments could compel Apple to rework it to detect
dissent or anti-government messages.

There are valid arguments about "mission creep" with technologies such
as this. Prominent digital rights nonprofit the Electronic Frontier
Foundation, for example, notes that a similar system originally intended
to detect CSAM has been repurposed to create a database of "terrorist"
content.

There are also valid concerns raised about the Messages scanning
system. Although messages are end-to-end encrypted, the on-device
machine learning is technically scanning them. In its current form, it
can't be used to view the content of messages, and as mentioned
earlier, no external reports are generated by the feature. However,
the feature relies on a classifier that could theoretically be
repurposed for oppressive ends. The EFF, for example, questioned
whether the feature could be adapted to flag LGBTQ+ content in
countries where homosexuality is illegal.

At the same time, assuming that these worst-case scenarios are the
inevitable end goal is a slippery-slope fallacy.

These are all good debates to have. However, levying criticism only at
Apple and its systems, on the assumption that the worst case is
coming, is unfair and unrealistic. That's because pretty much every
service on the internet has long done the same type of scanning that
Apple is now doing for iCloud.

Apple's system is nothing new

A casual view of the Apple-related headlines might lead one to think
that the company's child safety mechanisms are somehow unique. They're
not.

As The Verge reported in 2014, Google has been scanning the inboxes of
Gmail users for known CSAM since 2008. Some of those scans have even led
to the arrest of individuals sending CSAM. Google also works to flag and
remove CSAM from search results.

Microsoft originally developed the system that the NCMEC uses. The
PhotoDNA system, donated by Microsoft, is used to scan image hashes and
detect CSAM even if a picture has been altered. Facebook started using
it in 2011, and Twitter adopted it in 2013. Dropbox uses it, and nearly
every internet file repository does as well.
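PhotoDNA itself is proprietary, but the general idea behind
alteration-tolerant matching can be sketched with a much simpler
"average hash": downscale the image, hash its brightness pattern, and
compare hashes by how many bits differ rather than demanding an exact
match. The sketch below is an illustration of that concept, not
PhotoDNA's algorithm, and it assumes the image has already been
reduced to an 8x8 grayscale grid.

    // Illustration of a perceptual hash, not PhotoDNA's actual algorithm.
    // Assumes the caller has already downscaled the image to 8x8 grayscale.
    func averageHash(of grayPixels: [UInt8]) -> UInt64 {
        precondition(grayPixels.count == 64, "expects an 8x8 grayscale grid")
        let mean = grayPixels.reduce(0) { $0 + Int($1) } / 64
        var hash: UInt64 = 0
        for (i, pixel) in grayPixels.enumerated() where Int(pixel) >= mean {
            hash |= UInt64(1) << UInt64(i)  // one bit per pixel: at or above average brightness
        }
        return hash
    }

    // Two images "match" when their hashes differ in only a few bits, which
    // is why recompression or minor edits don't defeat this kind of check.
    func hammingDistance(_ a: UInt64, _ b: UInt64) -> Int {
        (a ^ b).nonzeroBitCount
    }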

Apple, too, has said that it scans for CSAM. In 2019, it updated its
privacy policy to state that it scans for "potentially illegal content,
including child sexual exploitation material." A 2020 story from The
Telegraph noted that Apple scanned iCloud for child abuse images.

Getting up in arms about Apple introducing a similar system is ignoring
the context. While Apple is uniquely positioned as a privacy-respecting
company, this type of CSAM scanning is commonplace among internet
companies — and for good reason.

Apple has found a middle ground

As with any Apple release, only the middle 75% of the argument is
sane. The folks screaming that privacy is somehow being violated don't
seem to be that concerned about the fact that Twitter, Microsoft,
Google, and every other photo sharing service have been using these
hashes for a decade or more.

They also don't seem to care that Apple gleans zero information from
the hashing process, or from the Messages alerts to parents. Your
photo library with 2,500 pictures of your dog, 1,200 pictures of a
Budweiser slowly warming on the bow of your boat, and 250 pictures of
the ground because you hit the volume button accidentally while the
camera app was open is still just as opaque to Apple as it has ever
been. And, if you're an adult, Apple won't flag, nor will it care,
that you are sending pictures of your genitalia to another adult.

Arguments that this somehow levels the privacy playing field with
other products depend very much on how you personally define privacy.
Apple is still not using Siri to sell you products based on what you
tell it, it is still not looking at email metadata to serve you ads,
and it is still not taking other genuinely privacy-violating measures.
Similar arguments that Apple should do whatever it takes to protect
society regardless of the cost are similarly bland and nonsensical.

But, as usual with anything from Apple as of late, the messaging is
self-serving and incomplete. Furthermore, the lack of transparency is an
issue as it has always been.

It should be Apple's responsibility to deliver the information in a
sane and digestible manner for something as crucial, and as
inflammatory, as this has inexplicably turned out to be. Apple is just
the latest tech giant to adopt a system like this, not the first. And,
as always, it has generated the most internet-based wrath of them all.

With Apple's lack of transparency comes the obvious question of
whether Apple is doing what it says it's doing, the way it says it's
doing it.

There's no good answer to that question. We make decisions every day
about which brands to trust, based on our own criteria. If your
criterion is somehow that Apple is now worse than the other tech
giants who have been doing this longer, or your concern is that Apple
might allow law enforcement to subpoena your iCloud records if there's
something illegal in them, as it has been doing since iCloud launched,
then it may be time to finally make that platform move you've probably
been threatening to make for 15 years or longer.

And if you go, you need to accept that your ISP will happily provide
your data to whoever asks for it, for the most part. And as it pertains
to hardware, the rest of the OS vendors have been doing this, or more
invasive versions of it, for longer. And, all of the other vendors have
much deeper privacy issues for users in day-to-day operations.

There's always a thin line between security and freedom, and a perfect
solution doesn't exist. Arguably, this solution is more about Apple's
own liability than anything else, since Apple does not proactively
hunt down CSAM by analyzing context. But if it did, it would actually
be committing the privacy violation that some folks are screaming
about now, since it would be examining pictures by context and
content.

While privacy is a fundamental human right, any technology that curbs
the spread of CSAM is also inherently good. Apple has managed to develop
a system that works toward that latter goal without significantly
jeopardizing the average person's privacy.

Slippery-slope concerns about the disappearance of online privacy are
worth having. However, those conversations need to include all of the
players on the field — including those that much more clearly violate
the privacy of their users on a daily basis.

AppleInsider has affiliate partnerships and may earn commission on
products purchased through affiliate links. These partnerships do not
influence our editorial content.

--
E-mail sent to this address may be devoured by my ravenous SPAM filter.
I often ignore posts from Google. Use a real news client instead.

JR