Shortly after reports today that Apple will start scanning iPhones for child-abuse images, the company confirmed its plan and provided details in a news release and technical summary.
"Apple's method of detecting known CSAM (child sexual abuse material) is designed with user privacy in mind," Apple's announcement said. "Instead of scanning images in the cloud, the system performs on-device matching using a database of known CSAM image hashes provided by NCMEC (National Center for Missing and Exploited Children) and other child safety organizations. Apple further transforms this database into an unreadable set of hashes that is securely stored on users' devices."
Apple provided more detail on the CSAM detection system in a technical summary and said its system uses a threshold "set to provide an extremely high level of accuracy and ensures less than a one in one trillion chance per year of incorrectly flagging a given account."
The changes will roll out "later this year in updates to iOS 15, iPadOS 15, watchOS 8, and macOS Monterey," Apple said. Apple will also deploy software that can analyze images in the Messages application for a new system that will "warn children and their parents when receiving or sending sexually explicit photos."
Apple accused of building “infrastructure for surveillance”
Despite Apple's assurances, security experts and privacy advocates criticized the plan.
"Apple is replacing its industry-standard end-to-end encrypted messaging system with an infrastructure for surveillance and censorship, which will be vulnerable to abuse and scope-creep not only in the US, but around the world," said Greg Nojeim, co-director of the Center for Democracy & Technology's Security & Surveillance Project. "Apple should abandon these changes and restore its users' faith in the security and integrity of their data on Apple devices and services."
For years, Apple has resisted pressure from the US government to install a "backdoor" in its encryption systems, saying that doing so would undermine security for all users. Apple has been lauded by security experts for this stance. But with its plan to deploy software that performs on-device scanning and share selected results with authorities, Apple is coming dangerously close to acting as a tool for government surveillance, Johns Hopkins University cryptography professor Matthew Green suggested on Twitter.
The client-side scanning Apple announced today could eventually "be a key ingredient in adding surveillance to encrypted messaging systems," he wrote. "The ability to add scanning systems like this to E2E [end-to-end encrypted] messaging systems has been a major 'ask' by law enforcement the world over."
Message scanning and Siri “intervention”
In addition to scanning devices for images that match the CSAM database, Apple said it will update the Messages app to "add new tools to warn children and their parents when receiving or sending sexually explicit photos."
"Messages uses on-device machine learning to analyze image attachments and determine if a photo is sexually explicit. The feature is designed so that Apple does not get access to the messages," Apple said.
When an image in Messages is flagged, "the photo will be blurred and the child will be warned, presented with helpful resources, and reassured it is okay if they do not want to view this photo." The system will let parents get a message if children do view a flagged photo, and "similar protections are available if a child attempts to send sexually explicit photos. The child will be warned before the photo is sent, and the parents can receive a message if the child chooses to send it," Apple said.
Apple said it will update Siri and Search to "provide parents and children expanded information and help if they encounter unsafe situations." The Siri and Search systems will "intervene when users perform searches for queries related to CSAM" and "explain to users that interest in this topic is harmful and problematic, and provide resources from partners to get help with this issue."
The Center for Democracy & Technology called the photo-scanning in Messages a "backdoor," writing:
The mechanism that will enable Apple to scan images in Messages is not an alternative to a backdoor—it is a backdoor. Client-side scanning on one "end" of the communication breaks the security of the transmission, and informing a third party (the parent) about the content of the communication undermines its privacy. Organizations around the world have cautioned against client-side scanning because it could be used as a way for governments and companies to police the content of private communications.
Apple’s technology for analyzing images
Apple's technical summary on CSAM detection includes a few privacy promises in the introduction. "Apple does not learn anything about images that do not match the known CSAM database," it says. "Apple can't access metadata or visual derivatives for matched CSAM images until a threshold of matches is exceeded for an iCloud Photos account."
Apple's hashing technology is called NeuralHash and it "analyzes an image and converts it to a unique number specific to that image. Only another image that appears nearly identical can produce the same number; for example, images that differ in size or transcoded quality will still have the same NeuralHash value," Apple wrote.
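Apple has not published NeuralHash's internals beyond the summary, but the general idea of perceptual hashing can be illustrated with a far simpler scheme. The sketch below is an "average hash," not Apple's algorithm; it only shows how resized or recompressed copies of an image map to (nearly) the same short fingerprint while different photos do not. The Pillow library, the 8x8 grid, and the commented file names are assumptions for the example.

```python
from PIL import Image  # assumes the Pillow imaging library is installed

def average_hash(path: str, grid: int = 8) -> int:
    """Toy perceptual hash -- NOT NeuralHash, just an illustration.

    Shrinks the image to an 8x8 grayscale grid and records, per pixel,
    whether it is brighter than the grid's mean brightness. Resizing or
    recompressing the source barely changes these 64 bits, while an
    unrelated photo produces a very different bit pattern.
    """
    img = Image.open(path).convert("L").resize((grid, grid))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits; a small distance means 'looks the same.'"""
    return bin(a ^ b).count("1")

# Hypothetical usage: a resized copy should land within a few bits.
# original = average_hash("photo.jpg")
# resized  = average_hash("photo_small.jpg")
# assert hamming_distance(original, resized) <= 5
```

Apple's NeuralHash is built on a neural-network embedding rather than pixel averaging, but the matching machinery downstream only relies on the property shown here: near-duplicate images collide while unrelated images do not.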
Before an iPhone or other Apple device uploads an image to iCloud, the "device creates a cryptographic safety voucher that encodes the match result. It also encrypts the image's NeuralHash and a visual derivative. This voucher is uploaded to iCloud Photos along with the image."
Using "threshold secret sharing," Apple's "system ensures that the contents of the safety vouchers cannot be interpreted by Apple unless the iCloud Photos account crosses a threshold of known CSAM content," the document said. "Only when the threshold is exceeded does the cryptographic technology allow Apple to interpret the contents of the safety vouchers associated with the matching CSAM images."
While noting the one-in-one-trillion probability of a false positive, Apple said it "manually reviews all reports made to NCMEC to ensure reporting accuracy." Users can "file an appeal to have their account reinstated" if they believe their account was mistakenly flagged.
User devices to store blinded CSAM database
User devices will store a "blinded database" that allows the device to determine when a photo matches a picture in the CSAM database, Apple explained:
First, Apple receives the NeuralHashes corresponding to known CSAM from the above child-safety organizations. Next, these NeuralHashes go through a series of transformations that includes a final blinding step, powered by elliptic curve cryptography. The blinding is done using a server-side blinding secret, known only to Apple. The blinded CSAM hashes are placed in a hash table, where the position in the hash table is purely a function of the NeuralHash of the CSAM image. This blinded database is securely stored on users' devices. The properties of elliptic curve cryptography ensure that no device can infer anything about the underlying CSAM image hashes from the blinded database.
An iPhone or other device will analyze user photos, compute a NeuralHash, and look up "the entry in the blinded hash table." The device "also uses the blinded hash that the system looked up to obtain a derived encryption key" and uses that encryption key "to encrypt the associated payload data."
Combined with other steps, this ensures that only images matching the CSAM database will be decrypted, Apple wrote:
If the user image hash matches the entry in the known CSAM hash list, then the NeuralHash of the user image exactly transforms to the blinded hash if it went through the series of transformations done at database setup time. Based on this property, the server will be able to use the cryptographic header (derived from the NeuralHash) and using the server-side secret, can compute the derived encryption key and successfully decrypt the associated payload data.
If the user image doesn't match, the above step will not lead to the correct derived encryption key, and the server will be unable to decrypt the associated payload data. The server thus learns nothing about non-matching images.
The device doesn't learn about the result of the match because that requires knowledge of the server-side blinding secret.
Finally, the client uploads the image to the server along with the voucher that contains the encrypted payload data and the cryptographic header.
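Apple's full construction uses elliptic-curve cryptography and a private set intersection protocol, detailed in a separate paper. The commutative-blinding idea at its heart can be sketched with a much simpler, insecure stand-in: the server blinds each known hash with its secret, the device blinds its table lookup with fresh randomness, and the two blindings only combine into the same encryption key when the underlying hashes match. The modulus, the placeholder hash strings, the table-position truncation, and the SHA-256 key derivation below are all illustrative assumptions, not Apple's parameters.

```python
import hashlib
import secrets

# Toy stand-in for the blinded-database lookup. This is NOT Apple's
# elliptic-curve construction and is not secure: it uses a plain
# multiplicative group mod a prime purely to show the commutative-blinding idea.
P = 2**255 - 19  # prime modulus (assumed toy parameter)

def hash_to_group(h: bytes) -> int:
    """Map a perceptual-hash value to a group element."""
    return int.from_bytes(hashlib.sha256(h).digest(), "big") % P

def derive_key(elem: int) -> bytes:
    """Derive a symmetric key from a group element."""
    return hashlib.sha256(elem.to_bytes(32, "big")).digest()

# --- Server-side setup: blind each known hash with a secret exponent ---
server_secret = secrets.randbelow(P - 2) + 1        # known only to the server
known_hashes = [b"known-hash-1", b"known-hash-2"]   # placeholder hash values
blinded_table = {
    hashlib.sha256(h).digest()[:8]:                 # table position derived from the hash
    pow(hash_to_group(h), server_secret, P)         # blinded entry H(x)^s
    for h in known_hashes
}

# --- Device side: build a voucher for an image about to be uploaded ---
def make_voucher(image_hash: bytes):
    pos = hashlib.sha256(image_hash).digest()[:8]
    blinded = blinded_table.get(pos)
    if blinded is None:                             # simplification: real tables have no empty slots
        blinded = secrets.randbelow(P - 2) + 1
    r = secrets.randbelow(P - 2) + 1                # fresh per-voucher randomness
    header = pow(hash_to_group(image_hash), r, P)   # cryptographic header H(y)^r
    key = derive_key(pow(blinded, r, P))            # encrypts the voucher payload
    return header, key

# --- Server side: try to recover the payload key from the header ---
def server_key(header: int) -> bytes:
    return derive_key(pow(header, server_secret, P))  # (H(y)^r)^s

# A matching image: both sides compute H(y)^(r*s), so the keys agree.
header, key = make_voucher(b"known-hash-1")
assert server_key(header) == key

# A non-matching image: the keys disagree, so the payload stays sealed.
header, key = make_voucher(b"vacation-photo")
assert server_key(header) != key
```

The sketch preserves the property Apple emphasizes: the device cannot tell whether its image matched, since that determination requires the server-side secret, and the server only obtains a working decryption key for images that are actually in the table.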
As noted earlier, you can read the technical summary here. Apple also published a longer and more detailed explanation of the "private set intersection" cryptographic technology that determines whether a photo matches the CSAM database without revealing the result.
492 Reader Comments
They contradict themselves on privacy in the very next sentence.
- A blackbox algorithm for determining if images match (with some level of fuzziness to tolerate resizing, rotation, etc.)...
- Checks images on my phone against a blackbox database of hashes that someone swears are really bad (and that, of course, can't be verified in any way)...
- Gives me no way to determine if anything matches anything in that database...
- Except if there are too many matches, Apple decrypts them, disables my account (but I can appeal!), and delivers some blob of information about me to some group.
Apple bent before China when it came to iCloud servers, and somehow I'm supposed to trust that this impossible-to-identify database of things that, if matched, will easily ruin people's lives, won't be abused over the years to come by various governments who don't like particular memes (see "Pooh Bear and China" for an example)?
... and I'm also supposed to believe that the phone can reliably identify "sexually explicit images," when, if you put 20 adults in a room and ask them to sort what is and isn't a sexually explicit image, you'll get nothing close to agreement. Meanwhile, other machine learning projects can't tell the difference between the moon, low in the sky, and a traffic light.
Great.
I'm going to sit down with the papers they've released this weekend and dig through them in depth, but... I'm thinking that I may simply be done with Apple. And at this point, that means "done with smartphones." If Android hasn't gone this route yet, it certainly will soon.
I lived before smartphones, and it's looking more and more like I'm going to live after smartphones too. At least for me.
If this sort of thing doesn't lead to a dramatic impact on Apple's bottom line for devices, the message they'll get is loud and clear: "Nobody cares if we scan your content, on your devices, for what someone else said is bad."
Looks like roughly a billion Apple users currently, for context.
Edit: missed it at first, but apparently it's a "one in one trillion chance per year of incorrectly flagging a given account." Which, if accurate, means closer to a 1/1,000 chance per year of ANY account getting falsely flagged?
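Quick back-of-the-envelope check, assuming roughly a billion accounts (my round number, not Apple's):

```python
per_account = 1e-12   # Apple's stated per-account, per-year false-flag odds
accounts = 1e9        # rough guess at the size of the user base

# Chance that at least one account gets falsely flagged in a year.
p_any = 1 - (1 - per_account) ** accounts
print(p_any)          # ~0.001, i.e. roughly 1 in 1,000
```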
From your prior posts, you never have.
The machine learning to identify sexting seems like a biiiiig can of worms. If they know the operator is a child, and they are pretty sure it's a sext, and they let them send it or receive it, that seems like a trial lawyer's dream scenario. "$2T corporation knowingly lets kids sext!" or "Apple knew my little Johnny was receiving scary images, and they let it happen!"
But...
How is it OK to solve for that by "scanning everyone's photos whether they like it or not"?
This is like having a policy of just going door to door and randomly searching your house whenever the government wants to -- even without cause.
I'm sorry, but this is dystopian stuff.
Hardly "all of the sudden". They've been "at odds" with, or under pressure from, law enforcement for years (in some places).
I can't believe, after all the work Apple has done to push the privacy angle, that they're going to ruin it all by including this backdoor in all their systems.
A Pixel with a custom ROM.
It takes all of five to ten minutes using a web-based UI to load GrapheneOS on a Pixel device. It is pretty simple.
Apple: "We are going to scan all iPhones for possible illegal content and report it to the government!"
Why the fuck did Apple decide to do this, on users' phones no less?
It really seems like Apple is doing this to get ahead of some regulatory hurdle in the US or elsewhere, or to try to force Google to do something. Apple would only do this if it ultimately saves or makes Apple craploads of money, because no user is asking for this. I want to know exactly why they decided to spend a bunch of money on this feature that doesn't benefit consumers in any way.
This is from the company that makes Siri.
On this point, they aren't claiming to recognize whether arbitrary photographs contain CP, but rather are looking for whether known instances of CP from a law enforcement database exist on your phone. This is a more tractable problem, but I still question their 1/trillion false positive rate after seeing the false positive rate of other content identification systems rolled out for copyright enforcement.
Edit: Nevermind, I finished reading the story and see you were commenting on a different part. Yeah, there are sure to be false positives there, and I'm certain parents will overreact when receiving notifications for these false positives.
I can understand from Apple's point of view why this would be desirable: it gives them a definitive way to demonstrate that iCloud probably isn't hosting a crap ton of CP, without having the ability to decrypt the actual photos.
If it's limited only to iCloud Photos, then I'm slightly more open to the idea. The whole thing still gives me the creeps, and it's easy to see how the scope of this system could easily expand. But I can also appreciate that Apple doesn't want to become the image host of choice for CP archives.
Pick your poison
It's going to be difficult to push back on government requests. And the NSO Group and the like are gonna take a crack at this.
This isn't going to end well.
This.
It will never end, and just keep expanding, and the ramifications of that are just terrible.
It’s surprising that they’re doing the matching at upload time rather than at share time, since that is ostensibly what they’re trying to prevent. But that’s one way of keeping everything on device.
It’s also unclear what happens to *existing* photos already uploaded to iCloud Photos. Or photos backed up to iCloud through iCloud Backup rather than iCloud Photos.
My biggest concern is that the hashes are secret and there is no way for *anyone* to verify that Apple (a) is actually checking the images before forwarding on a report to law enforcement and (b) is not being coerced by law enforcement to include additional hashes unrelated to CSAM.
Given that Apple has just spent hundreds of millions on a campaign selling the virtues of their privacy approach with iOS, I suspect there's been some significant behind-the-scenes lobbying to push this product out the door in the United States. I'm wondering about the quid pro quo.
The classifier for novel photos seems to make that worse: a repressive government can use false positives to justify searches in the same way that American cops can ask their dog to give them permission to stop anyone and search their effects. It's described as only for children, but I can't imagine some government won't pass a law saying it has to be enabled for their entire country.
It will never end, and just keep expanding, and the ramifications of that are just terrible.
Sir,
We have scanned your comment post and the hash matches a record in our thought crime database.
Please remain where you are (running will not work, as we can track your movements). A thought-crime breach team will be with you within [2:00] minutes. Do not attempt to change your thinking; we will do that for you at a later date.
Thanks for your cooperation,
The Team.
Or for copyrighted images or other media. Bringing the equivalent of Content ID to your personal device.
Seems like a stretch that it would be used for such a purpose, but I wouldn't have thought Apple would be scanning users' encrypted, private phone data for purported illegal content, exfiltrating it to Apple, and reporting it to the government. So what used to be a ridiculous stretch is now entirely possible.
Apple: "We are going to scan all iPhones for possible illegal content and report it to the government!"
Why the fuck did Apple decide to do this, on users' phones no less?
It really seems like Apple is doing this to get ahead of some regulatory hurdle in the US or elsewhere, or to try to force Google to do something. Apple would only do this if it ultimately saves or makes Apple craploads of money, because no user is asking for this. I want to know exactly why they decided to spend a bunch of money on this feature that doesn't benefit consumers in any way.
To avoid regulation.
You don’t need to regulate us, we are regulating ourselves! Robustly!