TL;DR: No, it isn’t. If that’s all you wanted to know, you can stop reading.
> "Has anybody noticed that Apple just gave a talk about how they secured a master key that would allow en-masse brute-forcing of device PINs" — Pwn All The Things (@pwnallthethings) August 9, 2016

> "Still crazy how Apple went to BlackHat, told 1000 attendees how they build a secure crypto backdoor for their data center and nobody noticed" — Pwn All The Things (@pwnallthethings) August 11, 2016
Still, as you can see there’s been some talk on Twitter about the subject, and I’m afraid it could lead to a misunderstanding. That would be too bad, since Apple’s new technology is kind of a neat experiment.
So while I promise that this blog is not going to become all-Apple-all-the-time, I figured I’d take a minute to explain what I’m talking about. This post is loosely based on an explanation of Apple’s new escrow technology that Ivan Krstic gave at BlackHat. You should read the original for the fascinating details.
What is Cloud Key Vault (and what is iCloud Keychain)?
A few years ago Apple quietly introduced a new service called iCloud Keychain. This service is designed to allow you to back up your passwords and secret keys to the cloud. Now, if backing up your sensitive passwords gives you the willies, you aren’t crazy. Since these probably include things like bank and email passwords, you really want these to be kept extremely secure.
And — at least going by past experience — security is not where iCloud shines.
The problem here is that passwords need to be secured at a much higher assurance level than most types of data backup. But how can Apple ensure this? We can’t simply upload our secret passwords the way we upload photos of our kids. That would create a number of risks, including:
- The risk that someone will guess, reset or brute-force your iCloud password. Password resets are a particular problem. Unfortunately these seem necessary for normal iCloud usage, since people do forget their passwords. But that’s a huge risk when you’re talking about someone’s entire password collection.
- The risk that someone will break into Apple’s infrastructure. Even if Apple gets their front-end brute-forcing protections right (and removes password resets), the password vaults themselves are a huge target. You want to make sure that even someone who hacks Apple can’t get them out of the system.
- The risk that a government will compel Apple to produce data. Maybe you’re thinking of the U.S. government here. But that’s myopic: Apple stores iCloud data all over the world.
So clearly Apple needs a better way to protect these passwords. How to do it?
Why not just encrypt the passwords?
It is certainly possible for an Apple device to encrypt your password vault before sending it to iCloud. The problem here is that Apple doesn’t necessarily have a strong encryption key to do this with. Remember that the point of a backup is to survive the loss of your device, and thus we can’t assume the existence of a strong recovery key stored on your phone.
This leaves us with basically one option: a user password. This could be either the user’s iCloud password or their device passcode. Unfortunately for the typical user, these tend to be lousy. They may be strong enough to use as a login password — in a system that allows only a very limited number of login attempts. But the kinds of passwords typical users choose to enter on mobile devices are rarely strong enough to stand up to an offline dictionary attack, which is the real threat when using passwords as encryption keys.
(Even using a strong memory-hard password hash like scrypt — with crazy huge parameters — probably won’t save a user who chooses a crappy password. Blame phone manufacturers for making it painful to type in complicated passwords by forcing you to type them so often.)
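To see why, here is a small Python sketch (my own illustration, not anything Apple ships) of what an offline attack against a passcode-derived key looks like. Even with a memory-hard KDF like scrypt, the attacker only has to walk a tiny keyspace; the passcode and the parameter values below are made up for the example.

```python
# Why a passcode-derived key fails against an offline attack: the KDF can be
# slow and memory-hard, but the passcode space is tiny.
import hashlib
import os

def derive_key(pin: str, salt: bytes) -> bytes:
    # scrypt multiplies the attacker's per-guess cost, but it can't fix
    # a keyspace of only 10,000 candidates.
    return hashlib.scrypt(pin.encode(), salt=salt, n=2**14, r=8, p=1, dklen=32)

salt = os.urandom(16)
target = derive_key("0451", salt)   # a typical short numeric passcode

# Offline attack: with the encrypted blob (and salt) in hand, the attacker
# simply tries every candidate PIN. No server exists to count the attempts.
for i in range(10_000):
    guess = f"{i:04d}"
    if derive_key(guess, salt) == target:
        print("recovered passcode:", guess)
        break
```

Even at these scrypt settings, walking all 10,000 four-digit candidates takes minutes on a laptop, which is why rate limiting by a trusted party, rather than key derivation alone, is what actually protects short passcodes.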
So what’s Apple to do?
So Apple finds itself in a situation where they can't trust the user to pick a strong password. They can't trust their own infrastructure. And they can't trust themselves. That's a problem. Fundamentally, computer security requires some degree of trust — someone or something has to be trusted somewhere.
Apple’s solution is clever: they decided to make something more trustworthy than themselves. To create a new trust anchor, Apple purchased a bunch of fancy devices called Hardware Security Modules, or HSMs. These are sophisticated, tamper-resistant specialized computers that store cryptographic keys and perform operations with them, while preventing even a malicious operator from extracting the keys. The high-end HSMs Apple uses also allow the owner to load custom programs onto them.
Rather than trusting Apple, your phone encrypts its secrets under a hardcoded 2048-bit RSA public key that belongs to Apple’s HSM. It also encrypts a function of your device passcode, and sends the resulting encrypted blob to iCloud. Critically, only the HSM has a copy of the corresponding RSA decryption key, thus only the HSM can actually view any of this information. Apple’s network sees only an encrypted blob of data, which is essentially useless.
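To make the pattern concrete, here is a toy sketch in Python of "encrypt to a hardcoded public key," using the cryptography package. The key names, the record contents, and the use of raw RSA-OAEP are my own illustration; Apple's actual blob format is not public at this level of detail.

```python
# Toy sketch of the escrow pattern (not Apple's wire format): the client
# encrypts a small escrow record directly to a hardcoded HSM public key,
# so only the HSM's private key, which never leaves the HSM, can open it.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Stand-in for the 2048-bit public key baked into the device. We generate a
# keypair here only so the example is self-contained; in the real system the
# private half would exist only inside the HSM.
hsm_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
hsm_public = hsm_private.public_key()

# Invented record layout, purely for illustration.
escrow_record = b"keychain-wrapping-key || salted passcode verifier"

blob = hsm_public.encrypt(
    escrow_record,
    padding.OAEP(
        mgf=padding.MGF1(algorithm=hashes.SHA256()),
        algorithm=hashes.SHA256(),
        label=None,
    ),
)
# `blob` is what gets stored in iCloud: opaque to Apple's servers,
# decryptable only by code running inside the HSM.
```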
When a user wishes to recover their secrets, they authenticate themselves directly to the HSM. This is done using a user’s “iCloud Security Code” (iCSC), which is almost always your device passcode — something most people remember after typing it every day. This authentication is done using the Secure Remote Password protocol, ensuring that Apple (outside of the HSM) never sees any function of your password.
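If you haven't seen SRP before, here is roughly what that flow looks like, sketched with the third-party pysrp package (`pip install srp`). This is my illustration of the protocol's shape, not Apple's implementation; the user ID and passcode are made up. The verifier side, standing in for the HSM, holds only a salt and a password verifier, never the passcode itself.

```python
import srp

# Enrollment: the device derives a salted verifier from the iCSC and escrows
# it (inside the encrypted blob above). The passcode itself is never sent.
salt, verifier = srp.create_salted_verification_key("user-id", "123456")

# Recovery: the user proves knowledge of the iCSC to the verifier.
usr = srp.User("user-id", "123456")
uname, A = usr.start_authentication()

svr = srp.Verifier(uname, salt, verifier, A)   # HSM side: salt + verifier only
s, B = svr.get_challenge()

M = usr.process_challenge(s, B)                # client's proof of the passcode
HAMK = svr.verify_session(M)                   # verifier checks the proof
usr.verify_session(HAMK)                       # client checks the verifier's proof

assert usr.authenticated() and svr.authenticated()
```

The point of the exercise is the property in the prose above: everything Apple's servers relay is either a public protocol value or a verifier, so nothing outside the HSM ever learns a function of your passcode that could be attacked offline.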
Now, I said that device passcodes are lousy secrets. That’s true when we’re talking about using them as encryption keys — since offline decryption attacks allow the attacker to make an unlimited number of attempts. However, with the assistance of an HSM, Apple can implement a common-sense countermeasure to such attacks: they limit you to a fixed number of login attempts. This is roughly the same protection that Apple implements on the devices themselves.
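The counting logic itself is simple; what matters is that it runs inside the HSM, where nobody can reset the counter or copy the record for offline guessing. Here is a minimal Python sketch of the idea. The limit of 10 and the permanent lockout are illustrative choices, not Apple's published policy.

```python
# Guess-limiting logic of the kind enforced inside the HSM (illustrative).
class EscrowRecord:
    MAX_ATTEMPTS = 10  # illustrative limit, not Apple's published number

    def __init__(self, check_proof):
        # check_proof: a callable that verifies the user's passcode proof,
        # e.g. the SRP verification shown earlier.
        self._check_proof = check_proof
        self.failed_attempts = 0
        self.locked = False

    def try_recover(self, proof):
        """Return the escrowed secret on success, None on a wrong guess."""
        if self.locked:
            raise PermissionError("escrow record is permanently locked")
        if self._check_proof(proof):
            self.failed_attempts = 0
            return b"escrowed keychain secrets"  # placeholder payload
        self.failed_attempts += 1
        if self.failed_attempts >= self.MAX_ATTEMPTS:
            # Because this state lives inside the HSM, an attacker cannot
            # reset the counter or clone the record to guess offline.
            self.locked = True
        return None
```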
Figure: The encrypted contents of the data sent to the HSM (source).
The upshot of all these ideas is that — provided that the HSM works as designed, and that it can’t be reprogrammed — even Apple can’t access your stored data except by logging in with a correct passcode. And they only get a limited number of attempts to guess correctly, after which the account locks.
This rules out both malicious insiders and government access, with one big caveat.
What stops Apple from just reprogramming its HSM?
This is probably the biggest weakness of the system, and the part that’s driving the “backdoor” concerns above. You see, the HSMs Apple uses are programmable. This means that — as long as Apple still has the code signing keys — the company can potentially update the custom code it loads onto the HSM to do all sorts of things.
These things might include programming the HSM to output decrypted escrow keys, disabling the mechanism that counts login attempts, or even loading a program that runs a brute-force dictionary attack inside the HSM itself. Any of these would allow Apple to brute-force your passcode and recover your passwords.
Fortunately Apple has thought about this problem and taken steps to deal with it. Note that on HSMs like the one Apple is using, the code signing keys live on a special set of admin smartcards. To remove these keys as a concern, once Apple is done programming the HSM, they run these cards through a process that they call a “physical one-way hash function”.
If that sounds complicated, here’s the slightly simpler version: once the programming is done, the admin cards are physically destroyed.
So, with the code signing keys destroyed, updating the HSM to allow nefarious actions should not be possible. Pretty much the only action Apple can take is to wipe the HSM, which would destroy the HSM’s RSA secret keys and thus all of the encrypted records it’s responsible for. To make sure all admin cards are destroyed, the company has developed a complex ceremony for controlling the cards prior to their destruction. This mostly involves people making assertions that they haven’t made copies of the code signing key — which isn’t quite foolproof. But overall it’s pretty impressive.
The downside for Apple, of course, is that there had better not be a bug in any of their programming. Because right now there’s nothing they can do to fix it — except to wipe all of their HSMs and start over.
Couldn’t we use this idea to implement real crypto backdoors?
A key assertion I’ve heard is that if Apple can do this, then surely they can do something similar to escrow your keys for law enforcement. But a closer look at the system shows that this isn’t true at all.
To be sure, Apple’s reliance on a Hardware Security Module indicates a great deal of faith in a single hardware/software solution for storing many keys. Only time will tell if that faith is really justified. To be honest, I think it’s an overly-strong assumption. But iCloud Keychain is opt-in, so individuals can decide for themselves whether or not to take the risk. That wouldn’t be true of a mandatory law enforcement backdoor.
But the argument that Apple has enabled a law enforcement backdoor seems to miss what Apple has actually done. Instead of building a system that allows the company to recover your secret information, Apple has devoted enormous resources to locking themselves out. Only customers can access their own information. In other words, Apple has decided that the only way they can hold this information is if they don’t even trust themselves with it.
That’s radically different from what would be required to build a mandatory key escrow system for law enforcement. In fact, one of the big objections to such a backdoor — which my co-authors and I recently outlined in a report — is the danger that any of the numerous actors in such a system could misuse it. By eliminating themselves from the equation, Apple has effectively neutralized that concern.
If Apple can secure your passwords this way, then why don’t they do the same for your backed up photos, videos, and documents?
That’s a good question. Maybe you should ask them?