Apple plan raises security concerns

Apple explains how iPhones will scan photos for child-sexual-abuse images

Apple offers technical details, claims 1-in-1 trillion chance of false positives.

Close-up shot of female finger scrolling on smartphone screen in a dark environment.

Shortly after reports today that Apple will start scanning iPhones for child-abuse images, the company confirmed its plan and provided details in a news release and technical summary.

"Apple's method of detecting known CSAM (child sexual abuse material) is designed with user privacy in mind," Apple's announcement said. "Instead of scanning images in the cloud, the system performs on-device matching using a database of known CSAM image hashes provided by NCMEC (National Center for Missing and Exploited Children) and other child safety organizations. Apple further transforms this database into an unreadable set of hashes that is securely stored on users' devices."

Apple provided more detail on the CSAM detection system in a technical summary and said its system uses a threshold "set to provide an extremely high level of accuracy and ensures less than a one in one trillion chance per year of incorrectly flagging a given account."
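To put that bound in perspective, it is stated per account per year, so the expected number of wrongly flagged accounts depends on how many accounts exist. The back-of-envelope check below is illustrative only; the one-billion account count is an assumed order of magnitude, not a figure from Apple.

```python
# Rough scale check of Apple's stated bound. The per-account probability is
# Apple's figure; the account count is an assumed order of magnitude.
p_false_flag_per_account_per_year = 1e-12   # Apple's stated upper bound
assumed_accounts = 1e9                      # hypothetical scale, for illustration only

expected_false_flags_per_year = p_false_flag_per_account_per_year * assumed_accounts
print(expected_false_flags_per_year)        # 0.001 -> roughly one wrongly flagged account per thousand years
```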

The changes will roll out "later this year in updates to iOS 15, iPadOS 15, watchOS 8, and macOS Monterey," Apple said. Apple will also deploy software that can analyze images in the Messages application for a new system that will "warn children and their parents when receiving or sending sexually explicit photos."

Apple accused of building “infrastructure for surveillance”

Despite Apple's assurances, security experts and privacy advocates criticized the plan.

"Apple is replacing its industry-standard end-to-end encrypted messaging system with an infrastructure for surveillance and censorship, which will be vulnerable to abuse and scope-creep not only in the US, but around the world," said Greg Nojeim, co-director of the Center for Democracy & Technology's Security & Surveillance Project. "Apple should abandon these changes and restore its users' faith in the security and integrity of their data on Apple devices and services."

For years, Apple has resisted pressure from the US government to install a "backdoor" in its encryption systems, saying that doing so would undermine security for all users. Apple has been lauded by security experts for this stance. But with its plan to deploy software that performs on-device scanning and share selected results with authorities, Apple is coming dangerously close to acting as a tool for government surveillance, Johns Hopkins University cryptography Professor Matthew Green suggested on Twitter.

The client-side scanning Apple announced today could eventually "be a key ingredient in adding surveillance to encrypted messaging systems," he wrote. "The ability to add scanning systems like this to E2E [end-to-end encrypted] messaging systems has been a major 'ask' by law enforcement the world over."

Message scanning and Siri “intervention”

In addition to scanning devices for images that match the CSAM database, Apple said it will update the Messages app to "add new tools to warn children and their parents when receiving or sending sexually explicit photos."

"Messages uses on-device machine learning to analyze image attachments and determine if a photo is sexually explicit. The feature is designed so that Apple does not get access to the messages," Apple said.

When an image in Messages is flagged, "the photo will be blurred and the child will be warned, presented with helpful resources, and reassured it is okay if they do not want to view this photo." The system will let parents get a message if children do view a flagged photo, and "similar protections are available if a child attempts to send sexually explicit photos. The child will be warned before the photo is sent, and the parents can receive a message if the child chooses to send it," Apple said.

Apple said it will update Siri and Search to "provide parents and children expanded information and help if they encounter unsafe situations." The Siri and Search systems will "intervene when users perform searches for queries related to CSAM" and "explain to users that interest in this topic is harmful and problematic, and provide resources from partners to get help with this issue."

The Center for Democracy & Technology called the photo-scanning in Messages a "backdoor," writing:

The mechanism that will enable Apple to scan images in Messages is not an alternative to a backdoor—it is a backdoor. Client-side scanning on one "end" of the communication breaks the security of the transmission, and informing a third party (the parent) about the content of the communication undermines its privacy. Organizations around the world have cautioned against client-side scanning because it could be used as a way for governments and companies to police the content of private communications.

Apple’s technology for analyzing images

Apple's technical summary on CSAM detection includes a few privacy promises in the introduction. "Apple does not learn anything about images that do not match the known CSAM database," it says. "Apple can't access metadata or visual derivatives for matched CSAM images until a threshold of matches is exceeded for an iCloud Photos account."

Apple's hashing technology is called NeuralHash and it "analyzes an image and converts it to a unique number specific to that image. Only another image that appears nearly identical can produce the same number; for example, images that differ in size or transcoded quality will still have the same NeuralHash value," Apple wrote.
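Apple's summary does not disclose NeuralHash's internals, which rely on a neural network. As a rough illustration of how perceptual hashing differs from ordinary cryptographic hashing, the sketch below uses a classic "average hash": near-identical images (resized or re-encoded copies) tend to land at or near Hamming distance zero from one another. This is not Apple's algorithm, the file names are hypothetical, and it assumes the Pillow imaging library is installed.

```python
# Illustrative only: a classic "average hash," NOT Apple's NeuralHash.
# It shows the general property described above: near-identical images
# (resized, re-encoded) tend to produce the same compact hash value.
from PIL import Image  # requires the Pillow package

def average_hash(path: str, hash_size: int = 8) -> int:
    """Downscale to grayscale, then set one bit per pixel above the mean brightness."""
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

# Hypothetical usage: a resized or re-encoded copy should land at or very
# near Hamming distance 0 from the original.
# print(hamming(average_hash("photo.jpg"), average_hash("photo_resized.jpg")))
```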

Before an iPhone or other Apple device uploads an image to iCloud, the "device creates a cryptographic safety voucher that encodes the match result. It also encrypts the image's NeuralHash and a visual derivative. This voucher is uploaded to iCloud Photos along with the image."

Using "threshold secret sharing," Apple's "system ensures that the contents of the safety vouchers cannot be interpreted by Apple unless the iCloud Photos account crosses a threshold of known CSAM content," the document said. "Only when the threshold is exceeded does the cryptographic technology allow Apple to interpret the contents of the safety vouchers associated with the matching CSAM images."
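Apple does not publish its exact threshold construction, but the primitive it names, threshold secret sharing, can be sketched with a textbook Shamir scheme: fewer than t shares reveal nothing about the secret, while any t of them reconstruct it. The code below is a generic illustration of that property, not Apple's implementation.

```python
# Minimal Shamir-style (t, n) threshold secret sharing sketch -- the generic
# primitive behind "threshold secret sharing," not Apple's specific scheme.
import random

PRIME = 2**127 - 1  # a Mersenne prime large enough for a demo secret

def make_shares(secret: int, t: int, n: int):
    """Split `secret` into n shares; any t of them can reconstruct it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(t - 1)]
    def f(x):  # evaluate the random degree-(t-1) polynomial at x
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

shares = make_shares(secret=123456789, t=3, n=5)
assert reconstruct(shares[:3]) == 123456789   # any 3 shares suffice; 2 reveal nothing
```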

While noting the 1-in-1 trillion probability of a false positive, Apple said it "manually reviews all reports made to NCMEC to ensure reporting accuracy." Users can "file an appeal to have their account reinstated" if they believe their account was mistakenly flagged.

User devices to store blinded CSAM database

User devices will store a "blinded database" that allows the device to determine when a photo matches a picture in the CSAM database, Apple explained:

First, Apple receives the NeuralHashes corresponding to known CSAM from the above child-safety organizations. Next, these NeuralHashes go through a series of transformations that includes a final blinding step, powered by elliptic curve cryptography. The blinding is done using a server-side blinding secret, known only to Apple. The blinded CSAM hashes are placed in a hash table, where the position in the hash table is purely a function of the NeuralHash of the CSAM image. This blinded database is securely stored on users' devices. The properties of elliptic curve cryptography ensure that no device can infer anything about the underlying CSAM image hashes from the blinded database.
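Apple's construction uses elliptic-curve operations; purely as a sketch of the blinding idea, the toy model below substitutes modular exponentiation in a large multiplicative group. The property it illustrates is the one Apple describes: without the server-side secret, the table of blinded hashes shipped to devices reveals nothing about the raw CSAM hashes. All names and parameters here are invented for illustration.

```python
# A deliberately simplified analogue of the blinding step, using modular
# exponentiation in a toy group instead of elliptic-curve math. Only the
# holder of SERVER_SECRET can map a NeuralHash to its blinded form, so the
# on-device table reveals nothing about the underlying hashes.
import hashlib
import secrets

P = 2**255 - 19                                 # toy prime modulus (illustration only)
SERVER_SECRET = secrets.randbelow(P - 2) + 1    # blinding secret, known only to the server

def hash_to_group(neural_hash: bytes) -> int:
    """Map a (hypothetical) NeuralHash value to a group element."""
    return int.from_bytes(hashlib.sha256(neural_hash).digest(), "big") % P

def blind(neural_hash: bytes) -> int:
    """Server-side blinding: exponentiate by the secret."""
    return pow(hash_to_group(neural_hash), SERVER_SECRET, P)

def table_position(neural_hash: bytes, table_size: int) -> int:
    """The table slot depends only on the NeuralHash itself, as Apple describes."""
    return int.from_bytes(hashlib.sha256(neural_hash).digest()[:8], "big") % table_size

# The device is shipped {table_position(h): blind(h)} for each known hash h,
# but without SERVER_SECRET it cannot tell whether any particular hash is in it.
```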

An iPhone or other device will analyze user photos, compute a NeuralHash, and look up "the entry in the blinded hash table." The device "also uses the blinded hash that the system looked up to obtain a derived encryption key" and uses that encryption key "to encrypt the associated payload data."
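Continuing the toy model above (and reusing its hash_to_group and table_position helpers), the device-side step might be sketched as follows. The device derives a key from whatever blinded entry sits at the computed slot, without learning whether that entry corresponds to its own image; the XOR "encryption" is a stand-in for real authenticated encryption, and the voucher format is invented.

```python
# Continuing the toy model above (hash_to_group, table_position, P defined there).
# The device derives a key from the blinded entry at the computed slot; in the
# real system every slot is populated, so a lookup always succeeds.
import hashlib

def derive_key(blinded_entry: int) -> bytes:
    """Symmetric key derived from the looked-up blinded hash."""
    return hashlib.sha256(blinded_entry.to_bytes(32, "big")).digest()

def xor_encrypt(key: bytes, payload: bytes) -> bytes:
    """Toy stand-in for real authenticated encryption (XOR keystream)."""
    stream = (key * (len(payload) // len(key) + 1))[:len(payload)]
    return bytes(a ^ b for a, b in zip(payload, stream))

def make_voucher(neural_hash: bytes, payload: bytes, blinded_table: dict, table_size: int) -> dict:
    """Build a (hypothetical) safety voucher for one uploaded image."""
    entry = blinded_table[table_position(neural_hash, table_size)]
    return {
        "header": hash_to_group(neural_hash),              # lets the server redo the blinding
        "ciphertext": xor_encrypt(derive_key(entry), payload),
    }
```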

Combined with other steps, this ensures that only images matching the CSAM database will be decrypted, Apple wrote:

If the user image hash matches the entry in the known CSAM hash list, then the NeuralHash of the user image exactly transforms to the blinded hash if it went through the series of transformations done at database setup time. Based on this property, the server will be able to use the cryptographic header (derived from the NeuralHash) and using the server-side secret, can compute the derived encryption key and successfully decrypt the associated payload data.

If the user image doesn't match, the above step will not lead to the correct derived encryption key, and the server will be unable to decrypt the associated payload data. The server thus learns nothing about non-matching images.

The device doesn't learn about the result of the match because that requires knowledge of the server-side blinding secret.

Finally, the client uploads the image to the server along with the voucher that contains the encrypted payload data and the cryptographic header.
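The server side of the same toy model shows why only matches become readable: re-blinding the voucher's header with the server secret reproduces the device's key only when the device's lookup hit the blinded hash of the same image. Again, this reuses the definitions above and is a sketch, not Apple's actual protocol.

```python
# Server side of the same toy model. Re-blinding the header with SERVER_SECRET
# reproduces the device's key only if the device's table lookup hit the blinded
# hash of the same image, i.e. only for a true match. A real scheme uses
# authenticated encryption, so a wrong key is detected rather than yielding garbage.
def server_try_decrypt(voucher: dict) -> bytes:
    reblinded = pow(voucher["header"], SERVER_SECRET, P)
    return xor_encrypt(derive_key(reblinded), voucher["ciphertext"])   # XOR is its own inverse
```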

As noted earlier, you can read the technical summary here. Apple also published a longer and more detailed explanation of the "private set intersection" cryptographic technology that determines whether a photo matches the CSAM database without revealing the result.

492 Reader Comments

  1. I have mixed feelings about this. I mean, I’m all for protecting children, but this seems like the top of a slippery slope…
    391 posts | registered
  2. It's always "think of the children" when new intentional backdoors are announced.
    42 posts | registered
  3. Apple, I love you, but you don’t need to play police.
    155 posts | registered
  4. Nothing to hide here, but I wonder how long I can hold out upgrading to iOS15. This is unacceptable, there has to be another way.
    828 posts | registered
  5. "Apple's method of detecting known CSAM (child sexual abuse material) is designed with user privacy in mind," Apple's announcement said. "Instead of scanning images in the cloud, the system performs on-device matching"

    They contradict themselves on privacy in the very next sentence.
    1763 posts | registered
  6. Well, score one for the Financial Times, huh.
    6382 posts | registered
  7. Alright. So:

    - A blackbox algorithm for determining if images match (with some level of fuzziness to tolerate resizing, rotation, etc)...

    - Checks images on my phone against a blackbox database of hashes that someone swears that are really bad (that, of course, can't be verified in any way)...

    - Gives me no way to determine if anything matches anything in that database...

    - Except if there are too many matches, Apple decrypts them, disables my account (but I can appeal!), and delivers some blob of information about me to some group.


    Apple bent before China when it came to iCloud servers, and somehow I'm supposed to trust that this impossible-to-identify database of things that, if matched, will easily ruin people's lives, won't be abused over the years to come by various governments who don't like particular memes (see "Pooh Bear and China" for an example)?

    ... and I'm also supposed to believe that the phone can reliably identify "sexually explicit images," when if you put 20 adults in a room and ask them to sort what is and isn't a sexually explicit image, you'll get nothing nearly resembling agreement. Meanwhile, other machine learning projects can't tell the difference between the moon, low in the sky, and a traffic light.

    Great.

    I'm going to sit down with the papers they've released this weekend and dig through them in depth, but... I'm thinking that I may simply be done with Apple. And at this point, that means, "Done with smartphones." If Android hasn't gone this route yet, they certainly will be soon.

    I lived before smartphones, and it's looking more and more like I'm going to live after smartphones too. At least for me.

    If this sort of thing doesn't lead to a dramatic impact on Apple's bottom line for devices, the message they'll get is loud and clear: "Nobody cares if we scan your content, on your devices, for what someone else said is bad."
    24672 posts | registered
  8. That one in a trillion number needs some context. Is that per matching attempt (ie, per photo)? Per user account? Per year? Depending on the rate denominator, it could balloon quickly.

    Looks like roughly a billion Apple users currently, for context.

    Edit: missed it at first, but apparently it's "one in one trillion chance per year of incorrectly flagging a given account". Which, if accurate, means closer to a 1/1000 chance of ANY account getting falsely flagged?

    Last edited by nehinks on Thu Aug 05, 2021 6:46 pm

    4910 posts | registered
  9. They sure are vague on exactly how the NeuralHash is computed. Which is probably necessary for their purposes, because if they explained it more clearly then it could probably be worked around. But I do not find it acceptable to have such a black box on my phone enabling other humans access to my private photos if it decides that they are suspect.
    6918 posts | registered
  10. There are so many slippery slope potential issues with starting down this path and where it might lead.
    346 posts | registered
  11. Apple: please don't look at my photos.
    37277 posts | registered
  12. This is absolutely not acceptable, and I will never again buy an Apple product.
    7757 posts | registered
  13. Law enforcement wanted a backdoor, and now they have one. Thanks Apple!
    73 posts | registered
  14. Now what phone am I supposed to get?
    2196 posts | registered
  15. malor wrote:
    This is absolutely not acceptable, and I will never again buy an Apple product.


    From your prior posts, you never have.
    37277 posts | registered
  16. Little weird apple decided it was in the child porn detection business all of a sudden.
    3136 posts | registered
  17. The hash-match-only approach seems sound, albeit the start of a heinous slippery slope. One can imagine Trump 2024 or Xi 2021 asking Apple to find all people that have some anti-Trump file on their phone.

    The machine learning to identify sexting seems like a biiiiig can of worms. If they know the operator is a child, and they are pretty sure it's a sext, and they let them send it or receive it, that seems like a trial lawyer's dream scenario. "$2T corporation knowingly lets kids sext!" or "Apple knew my little Johnny was receiving scary images, and they let it happen!".
    2423 posts | registered
  18. So...**obviously** child sexual abuse photos are an abhorrent thing..

    But..

    How is it ok to solve for that by "scanning everyones photos whether they like it or not"?

    This is like having a policy of just going door to door and randomly strip searching your house if the government wants to -- even without cause.

    I'm sorry, but this is dystopian stuff

    Last edited by laitpojes on Thu Aug 05, 2021 6:47 pm

    346 posts | registered
  19. Little weird apple decided it was in the child porn detection business all of a sudden.

    Hardly "all of the sudden". They've been "at odds" with, or under pressure from, law enforcement for years (in some places).

    Last edited by Sajuuk on Thu Aug 05, 2021 6:50 pm

    6382 posts | registered
  20. I've been using and promoting Apple products since I got a Macintosh SE in 1988. Presently, I have a Mac Book Pro. If this goes ahead as announced, I won't be upgrading to macOS Monterey and won't be buying another Apple product.
    57 posts | registered
  21. So what's to stop Apple or someone else from checking your photo hashes against a different database instead of CSAM. Say a database which includes "anti-government" photos/images/logos?

    I can't believe after all the work Apple has done to push the privacy angle that they're going to ruin it all by including this backdoor in all their systems.

    Last edited by Seminal on Thu Aug 05, 2021 6:52 pm

    131 posts | registered
  22. teknik wrote:
    Now what phone am I supposed to get?


    A pixel with a custom ROM.

    It takes all of five to ten minutes using a web-based UI to load GrapheneOS on a Pixel device. It is pretty simple.
    295 posts | registered
  23. "Nobody:

    Apple: We are going to scan all iPhones for possible illegal content and report it to the government!"


    Why the fuck did Apple decide to do this, on users' phones no less?

    It really seems like Apple is doing this to get ahead of some regulatory hurdle in the US or elsewhere, or to try to force Google do something. Apple would only do this if it ultimately saves or makes Apple craploads of money, because no user is asking for this. I want to know exactly why they decided to spend a bunch of money on this feature that doesn't benefit consumers in any way.

    Last edited by Skeppy on Thu Aug 05, 2021 6:48 pm

    263 posts | registered
  24. "1-in-1 trillion chance of false positives."

    This is from the company that makes Siri
    346 posts | registered
  25. So what is going to stop scammers from randomly texting iPhone users with child porn for Apple to detect and then turn over to the cops?
    550 posts | registered
  26. Syonyk wrote:
    ... and I'm also supposed to believe that the phone can reliably identify "sexually explicit images," when if you put 20 adults in a room and ask them to sort what is and isn't a sexually explicit image, you'll get nothing nearly resembling agreement. Meanwhile, other machine learning projects can't tell the difference between the moon, low in the sky, and a traffic light.


    On this point, they aren't claiming to recognize whether arbitrary photographs contain CP, but rather are looking for whether known instances of CP from a law enforcement database exist on your phone. This is a more tractable problem, but I still question their 1/trillion false positive rate after seeing the false positive rate of other content identification systems rolled out for copyright enforcement.

    Edit: Nevermind, I finished reading the story and see you were commenting on a different part. Yeah, there are sure to be false positives there, and I'm certain parents will overreact when receiving notifications for these false positives.

    Last edited by pavon on Thu Aug 05, 2021 7:05 pm

    1833 posts | registered
  27. Reading the details, it seems this scanning and matching only applies to photos uploaded to iCloud. So if you have iCloud disabled, then no scanning or matching happens.

    I can understand from Apple's point of view why this would be desirable, it gives them a definitive way to demonstrate that iCloud probably isn’t hosting a crap ton of CP. Without having the ability to decrypt the actual photos.

    If it's limited only to iCloud photos, then I'm slightly more open to the idea. The whole thing still gives me the creeps, and it's easy to see how the scope of this system could easily expand. But I can also appreciate that Apple doesn't want to become the image host of choice for CP archives.

    Last edited by AvianLyric on Thu Aug 05, 2021 6:51 pm

    5 posts | registered
  28. “The flag, the Bible and children”

    Pick your poison
    74 posts | registered
  29. Once implemented, it's going to be trivial to use this against a different data set - like documents shared by a whistleblower to journalists.

    It's going to be difficult to push back on government requests. And the NSO Group and likes are gonna take a crack at this.

    This isn't going to end well.

    Last edited by kfvg on Thu Aug 05, 2021 6:54 pm

    39 posts | registered
  30. rayer wrote:
    So what is going to stop scammers from randomly texting iPhone users with child porn for Apple to detect and then turn over to the cops?

    This.
    35 posts | registered
  31. I think we are headed to dark dark places if we really start normalizing "scanning user data" in this way.

    It will never end, and just keep expanding, and the ramifications of that are just terrible.
    346 posts | registered
  32. The way the feature is (initially) limited is curious; specifically it will focus on iCloud Photos. This service offers both a library of your own photos as well as the ability to share photos with others.

    It’s surprising that they’re doing the matching at upload time rather than at share time, since that is ostensibly what they’re trying to prevent. But that’s one way of keeping everything on device.

    It’s also unclear what happens to *existing* photos already uploaded to iCloud Photos. Or photos backed up to iCloud through iCloud Backup rather than iCloud Photos.

    My biggest concern is that the hashes are secret and there is no way for *anyone* to verify that Apple (a) is actually checking the images before forwarding on a report to law enforcement and (b) is not being coerced by law enforcement to include additional hashes unrelated to CSAM.

    Given that Apple has just spent hundreds of millions on a campaign selling the virtues of their privacy approach with iOS, I suspect they’ve had some significant behind-the-scenes lobbying to push this product out the door in the United States. I’m wondering about the quid pro quo.
    32 posts | registered
  33. The Chinese Communist Party has a 100% chance of abusing this.

    Last edited by DiscountTent on Thu Aug 05, 2021 6:54 pm

    97 posts | registered
  34. My big question is how building this infrastructure makes it hard for them to resist government abuse. Right now if, say, the Russian government demands that Apple identify gay people or fans of Navalny, they can honestly say that they have no way to do so. With this infrastructure, it seems like they could request additions to the CSAM database for, say, pictures of certain people, fliers, etc. and then it turns into a power play based on their willingness to play along because everyone knows the infrastructure is there.

    The classifier for novel photos seems to make that worse: a repressive government can use false positives to justify searches in the same way that American cops can ask their dog to give them permission to stop anyone and search their effects. It’s described as only for children, but I can’t imagine some government won’t pass a law saying it has to be enabled for their entire country.
    2443 posts | registered
  35. laitpojes wrote:
    I think we are headed to dark dark places if we really start normalizing "scanning user data" in this way.

    It will never end, and just keep expanding, and the ramifications of that are just terrible.



    Sir,

    We have scanned your comment post and the hash matches a record in our thought crime database.

    Please remain where you are (running will not work as we can track your movements), a thought crime breach team will be with you within [2:00] minutes, do not attempt to change your thinking, we will do that for you at a later date.

    Thanks for your cooperation
    The Team.
    7723 posts | registered
  36. this is completely unacceptable and a major invasion of privacy. good reason to not have an iphone.
    2 posts | registered
  37. kfvg wrote:
    Once implemented, it's going to be trivial to use this against a different data set - like documents shared by a whistleblower to journalists.


    Or for copyrighted images or other media. Bringing the equivalent of Content ID to your personal device.

    Seems like a stretch that it would be used for such a purpose, but I wouldn't have thought Apple would be scanning users' encrypted, private phone data for purported illegal content, exfiltrating it to Apple and reporting it to the government. So what used to be a ridiculous stretch is now entirely possible.
    263 posts | registered
  38. Skeppy wrote:
    "Nobody:

    Apple: We are going to scan all iPhones for possible illegal content and report it to the government!"


    Why the fuck did Apple decide to do this, on users' phones no less?

    It really seems like Apple is doing this to get ahead of some regulatory hurdle in the US or elsewhere, or to try to force Google do something. Apple would only do this if it ultimately saves or makes Apple craploads of money, because no user is asking for this. I want to know exactly why they decided to spend a bunch of money on this feature that doesn't benefit consumers in any way.


    To avoid regulation.

    You don’t need to regulate us, we are regulating ourselves! Robustly!
    97 posts | registered
