Defeating the iPhone Restricted Mode

Recently, Apple introduced Restricted Mode to protect iPhones from attacks by companies like Cellebrite and Grayshift, whose tools allow attackers to recover information from a phone without the password or fingerprint. Elcomsoft just announced that it can easily bypass it.

There is an important lesson in this: security is hard. Apple Computer has one of the best security teams on the planet. This feature was not tossed out in a day; it was designed and implemented with a lot of thought and care. If this team could make a mistake like this, imagine how bad a security feature is when implemented by a team without this kind of expertise.

This is the reason actual cryptographers and security engineers are very skeptical when a random company announces that their product is “secure.” We know that they don’t have the requisite security expertise to design and implement security properly. We know they didn’t take the time and care. We know that their engineers think they understand security, and that they designed only to the level that they themselves couldn’t break.

Getting security right is hard for the best teams in the world. It’s impossible for average teams.

Posted on July 18, 2018 at 6:25 AM • 40 Comments

Comments

Alan July 18, 2018 6:40 AM

It is especially hard to get security right on something that was not designed to be secure (like the iPhone)

Greg July 18, 2018 7:29 AM

I suppose the link to the Schneier’s Law post was more useful and generally relevant, but I’m still a bit disappointed that it wasn’t a link to the screwdriver-defeatable fingerprint lock.

Weather July 18, 2018 7:54 AM

Would access to tools minimise this type of stuff? If you had to make the tools to then break it, would that not add years before an exploit is found, and a new release would be out before then?

Sorry..

Dave Scocca July 18, 2018 8:10 AM

As I understand it, this is only a limited bypass, in that it only works if you have physical access to the phone within an hour of the last time it was unlocked. If the hour has passed by the time you get access to the phone, then this doesn’t bypass Restricted Mode.

That is–it doesn’t actually bypass the restrictions of Restricted Mode, it just in some circumstances prevents Restricted Mode from engaging.

There is also a fairly straightforward tradeoff here between security and convenience. Getting rid of the one-hour window would require users to re-authenticate more often–if I’ve been using my phone and want to plug it into a smart charger or into my car, how long of a lag since my last use should there need to be before I have to use face, fingerprint, or passcode to unlock it for that purpose?
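
To make that trade-off concrete, here is a minimal sketch of how such a grace-window policy might be modeled. This is purely illustrative, not Apple's actual logic; the function name, the charge-only behaviour, and the one-hour constant (taken from the reporting) are assumptions.

```python
from datetime import datetime, timedelta

# Hypothetical grace window; reporting on USB Restricted Mode put Apple's
# chosen value at one hour.
GRACE_WINDOW = timedelta(hours=1)

def usb_data_allowed(last_unlock: datetime, now: datetime) -> bool:
    """Return True if a newly attached USB accessory may use the data lines
    without the user re-authenticating first. Within GRACE_WINDOW of the last
    unlock the accessory works; after that, the port is effectively
    charge-only until a passcode or biometric unlock."""
    return (now - last_unlock) <= GRACE_WINDOW

# A phone unlocked 45 minutes ago still accepts the accessory; one unlocked
# two hours ago does not.
now = datetime.now()
print(usb_data_allowed(now - timedelta(minutes=45), now))  # True
print(usb_data_allowed(now - timedelta(hours=2), now))     # False
```

Shrinking GRACE_WINDOW toward zero closes the trick of keeping a seized device inside the window, at exactly the convenience cost Dave describes: every accessory connection would demand re-authentication.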

Edward Brode July 18, 2018 8:23 AM

One thing that underpins the Dunning-Kruger effect is the cognitive bias known as confirmation bias. I have an acquaintance who is a self-taught cook and proud of it. He had a poor teacher, of course, totally incompetent, and he kept trying to cook the food of his native Italy until it tasted right to him. He has invented all sorts of silly ideas like “put the salt in your dish early” and “don’t leave your pot on a pilot light,” all of which he swears make the dish taste different. And it does, to him.

This is what makes people proud of their lack of credentials in security. “The lone wolf working outside of the mainstream comes up with an unbreakable code.”

Chris July 18, 2018 8:58 AM

In the real world it does not matter if security is hard, because marketing is easy.

Moving iCloud data to Chinese servers, including encryption keys? Just point out how much better the user experience will be.

Charles T July 18, 2018 10:04 AM

“This is the reason actual cryptographers and security engineers are very skeptical when a random company announces that their product is “secure.” We know that they don’t have the requisite security expertise to design and implement security properly. ”

Sorry, this is inconsistent with the whole post and sounds like you’re saying that their problem is, they didn’t ask permission. How dare a “random company” do anything without asking “us” first. Don’t they know we’re the experts and if anyone was going to invent something new and better it would be “us” and since you ain’t “us” whatever you designed is by definition bad.

This thinking does not account for all the products that DO what they claim. I hardly think you’re asserting that NO security products or services do what they claim.

Again, your post begins with the observation that something designed by experts at Apple was ultimately less than claimed, but suddenly leaps into something about random companies.
Actually, given YOUR logic here, if the experts failed and by demonstration NOT ALL security products and services fail, then the likelihood other engineers can succeed increases. The fact that SOME engineers failed does not support your thesis.

And what exactly does “anyone can design a system that they themselves cannot break” even mean? Ask a dozen people and you’ll get a dozen different interpretations. Does it mean that absolutely no one can design a system that cannot be broken? Does it mean that, no matter who you are, only someone else can design a system that cannot be broken? What a contradiction.

Another glaring problem with this post is the assumption that all systems are the same. Maybe what you mean is that block ciphers generally cannot be proven to be secure, and confidence only increases after a whole bunch of people try to defeat them over a long time. Because there are a lot of system components that DO work and can be proven to be correct and do what is expected.

What exactly is this post intended to do? I didn’t want to suggest you need to guard your reputation, so let me ask another question by way of example.

Of all the countless software developers using frameworks such as .NET, and implementing classes as MS prescribes – these are well-tested software components that belong to namespaces being used by thousands of developers every day – are you saying that no matter what they do it is broken from the beginning and will never work as designed? That’s crazy. Even if you cite a bunch of examples in which software was poorly designed, that argument cannot take away from designs that do what they’re supposed to do.

So, even if a company (a company Bruce Schneier has never heard of and failed to obtain IBM’s permission first) uses the best static analysis tools at least VS, Resharper, NDepend, then submits the assembly to independent third-parties for analysis like Veracode, it doesn’t make any difference. It’s snake oil and by definition will fail.

I think you have some kind of personal problem.

billmarker July 18, 2018 10:11 AM

Right now, there are a lot of people at a lot of companies working extremely hard to solve new and very hard problems that come up every day. They’re burning through a lot of time and money, racing to solve very, very difficult problems. Some solutions may be unique. What they don’t need is another conceited Harvard snot sitting on his perch squawking about how everyone else is stupid (except for him).

wumpus July 18, 2018 10:15 AM

@Greg:
I doubt the doghouse has room for all the crowdfunded “security” products out there.
– Note that the claim “invulnerable to an attacker without a screwdriver” appears to be for this (or a competitor): https://www.rei.com/product/624081/garcia-bear-resistant-container
even if it equally applies to the doghouse product (note the link hardly makes outlandish claims about being “bearproof,” so maybe it’s a competitor, but the “key or screwdriver” mechanism appears to be the same).

Anura July 18, 2018 10:32 AM

@Charles T

You are really trying hard to find stuff to object to. I’ll respond to just this:

And what exactly does “anyone can design a system that they themselves cannot break” even mean? Ask a dozen people and you’ll get a dozen different interpretations. Does it mean that absolutely no one can design a system that cannot be broken? Does it mean that, no matter who you are, only someone else can design a system that cannot be broken? What a contradiction.

I think you read too fast. Those aren’t valid interpretations of the sentence, which is very unambiguous. You are capable of making a lock that you yourself can’t pick. You are capable of designing software that you yourself can’t find security holes in. You are capable of designing a cipher that you yourself cannot break. This is true for everyone. It’s making these things secure against skilled professionals that is hard. Even skilled professionals get it wrong more often than not.

Unless you take the time to educate yourself on this stuff, you won’t be able to tell whether your security is good, you will only know that you can’t break it. And this stuff is hard. Really hard, and requires a serious background in mathematics and logic. Asking an amateur to make a secure product is like asking an amateur to transplant a liver – they will not know what they are doing, even if they grasp the concepts.

Major companies do stuff like this all the time. Instead of paying someone who knows what they are doing, they pass it off to someone in-house who doesn’t know any better.

Sofa July 18, 2018 10:47 AM

This has been fixed in the newest iOS 12 Beta 4, from the release notes:

USB Accessories
New Features

• To improve security, iOS 12 beta may require you unlock your passcode-protected iPhone, iPad, or iPod touch in order to connect it to a Mac, PC, or USB accessory.
• If you use iPod Accessory Protocol (iAP) USB accessories over the Lightning connector (such as CarPlay, assistive devices, charging accessories, or storage carts) or you connect to a Mac or PC, you might need to unlock your device to recognize the accessory. If you don’t unlock your device, it won’t communicate with the accessory or computer, and it won’t charge. Note that you don’t need to unlock your device to charge using an Apple USB power adapter.
• If a USB accessory isn’t recognized after you unlock your device, disconnect it, unlock your device, and reconnect the accessory.
• If you normally use a USB assistive device to enter your passcode, you may allow it to communicate with your device while it is locked by enabling “USB Accessories” in Settings > Face ID/Touch ID & Passcode.

jerry 55 July 18, 2018 11:20 AM

I’ve been managing development teams for a long time. This post is what we regard as a team-killer, elitist mentality that destroys productivity. It goes something like this: to stay ahead of the pack you need to pull the rug out from under everyone else. Maybe this is why IBM’s stock is in the tank.

billmarker July 18, 2018 11:24 AM

@Anura

Hey, maybe it means that if someone designs a system you can break then it’s better.

Or, if anyone can design a system that they can’t break, then no one can design a system they can break. Hmm… discrete mathematics textbooks make it worse still.

Jesse Thompson July 18, 2018 1:49 PM

@Charles T @billmarker
Wow, lots of hate on Bruce in these comments.

I will go so far as to say Bruce is probably being way too hard on Apple in particular here. The exploit was relatively minor (against a relatively hard-to-hit surface area) and apparently quickly patched.

But probably more important is the fact that security isn’t a binary, and I’m a bit confused why Bruce is describing it as one. I’m pretty sure he’s among the people who initially impressed the opposite upon me in the first place.

Deterrent security like encryption is always nothing more than a game of making it more expensive for attackers to break the security and get the asset than the value of the asset (minus the expense of said security).
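
That game can be written down directly. Here is a toy sketch of the cost-benefit condition, with entirely made-up numbers for illustration:

```python
def attack_is_rational(expected_value_of_asset: float,
                       expected_cost_of_attack: float) -> bool:
    """An economically motivated attacker only bothers when the expected
    payoff exceeds the expected cost of defeating the security."""
    return expected_value_of_asset > expected_cost_of_attack

# Illustrative numbers only: security that pushes the attack cost above the
# asset's value deters a rational attacker even though it remains breakable.
print(attack_is_rational(expected_value_of_asset=10_000,
                         expected_cost_of_attack=250_000))  # False: deterred
```

The defense doesn’t need to be unbreakable, only expensive enough that the attack isn’t worth making for that particular attacker.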

So trying to compare Apple’s exploits with less well funded organizations isn’t fair because Apple also has more expensive and highly publicized assets to secure, and more complicated surface areas (devices negligently administered by millions of disparate people most of whom affect helplessness relating to anything digital) to defend against a much more well-funded and committed list of adversaries (potentially all national governments on Earth combined).

It’s like asking an 8-year-old scrapper in a rough neighborhood why he thinks he’s a good fighter when even George Foreman can lose to Muhammad Ali. I mean, that would relate to him how, precisely? He should give up and let the school bully take his lunch money because one anonymous heavyweight lost to another one? 😛

I think the lesson is fine that it’s easy to delude oneself into thinking you’ve built something secure presuming you never offer it to anyone smarter than yourself for cryptanalysis. @Charles T and @billmarker are wrong because Bruce isn’t suggesting that he personally has to approve all security, or that any specific group of people have to, just that your security can never reasonably be expected to withstand attack from anyone more experienced than the most experienced people who have audited it..

.. and that the broad Internet tends to have some number of such people in it so “good enough” can have a surprisingly high bar for anything public-facing.

TimH July 18, 2018 1:54 PM

@All those whose feels are hurted: Firstly, try rewriting Bruce’s post in a way that still does say that Apple’s team screwed up, when they are big enough to know better.

Bruce and similar security experts – yes, experts – have been harping on for donkey’s years about how electronics companies repeatedly make security-neophyte implementation errors. Yet we get debacle after debacle because general-purpose hardware and software engineers think they can do it themselves.

Hmm July 18, 2018 3:09 PM

Maybe we all should just avoid using the word “secure”?

“secure” = a measure of time based on an assumed knowledge of attacker capabilities.

We can call things “bearproof” because the capabilities of bears are FAIRLY well known.
The capabilities of bears aren’t constantly heuristically improving. *(THANK. GOD.)

There’s a tradeoff between ease of use and security, Apple doesn’t sell security devices.
They sell cell phones that link to hundreds/thousands of third party repos constantly.

Calling a cell phone secure is putting a raw salmon in your tent and expecting the best.

David Leppik July 18, 2018 4:05 PM

@Alan:

Except that Apple did design it to be secure. They even hired an MIT security bigwig who invented One Laptop Per Child’s passwordless security model. Apple explicitly didn’t want a general-purpose internet-connected computer in a cellphone, since cellphones don’t work if they don’t know your location and phone number.

That’s why apps on iPhones (and later Android) need to declare what resources they use in a manifest. That’s why apps don’t have direct access to the filesystem. That’s why it’s so hard to replace the OS on an iPhone or install software that hasn’t been reviewed by Apple. That’s why programs aren’t allowed to mark memory as executable (as is needed to write a fast JavaScript interpreter.)

Apple gets a lot of grief for locking down iOS so hard, especially when it directly helps their bottom line, but the truth is that Apple could have designed it based on the 2008 Mac (or Newton), and decided not to. They’ve consistently pointed at security as the reason.

Security is hard.

Hmm July 18, 2018 4:14 PM

https://www.engadget.com/2018/07/18/apple-icloud-data-china-stored-state-run-telco/

We’re tripping about the padlock being trivially overridden, again and again and again.
Yet they’re leaving the barn doors open to China’s government for 1/3 of their operations.

Since they’re WILLING to do that for China, what are they secretly willing to do elsewhere?

Not to mention, how good could their security be if a race condition @input defeats the countdown?
That’s like the first thing you would fuzz for, right? Or am I way off base?

David Leppik July 18, 2018 4:27 PM

One of the things that makes this particularly difficult is that some users need assistive technologies (e.g. special keypads) to unlock their phones. I don’t know much about these, but I do know that assistive technologies in general tend to be behind the times, especially if they are FDA approved.

Apple’s iOS security model in general is to bind the sensors to the security chip so that the thumbprint detector can’t be replaced without throwing away the secure data.

So if you need to use an insecure USB-based device for unlocking your iPad, but your iPad won’t talk to USB until it’s unlocked, you’re in trouble.
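
As a rough illustration of the binding David describes, here is a toy model only; the identifiers, the class name, and the wipe-on-mismatch policy are assumptions for the sketch, not Apple's design.

```python
import hmac
import os

class SecureEnclaveModel:
    """Toy model of pairing a biometric sensor to a security chip.

    At enrollment the chip records an identifier for the sensor; afterwards it
    only accepts matches reported by that exact sensor, and treats a swapped
    sensor as a reason to discard the protected key material."""

    def __init__(self, enrolled_sensor_id: bytes):
        self._enrolled_sensor_id = enrolled_sensor_id
        self._protected_keys = os.urandom(32)  # stand-in for secret material

    def accept_match(self, reporting_sensor_id: bytes) -> bool:
        if not hmac.compare_digest(reporting_sensor_id, self._enrolled_sensor_id):
            # Unknown sensor: refuse, and render the secrets unusable rather
            # than trust hardware that may have been replaced by an attacker.
            self._protected_keys = None
            return False
        return self._protected_keys is not None
```

The intended effect is the one David notes: swapping in a different fingerprint sensor doesn’t unlock anything, it just costs you the protected data.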

I imagine this would be especially hard on someone who uses an iPad-based speech synthesis app to talk, since they would have trouble telling someone what’s wrong.

Apple is trying to push out a change that will make things more secure for 99.99% of their users, and won’t even inconvenience 90% of the users, but could be terrible for the 0.01% if they get it wrong.

0xdeadbeef July 18, 2018 4:50 PM

Calm down everyone, this is a beta release. Let’s not throw the baby out with the bath water before the baby has even been conceived.

This is why software should be tested publicly before gold master releases.

albert July 18, 2018 5:42 PM

@Hmm,

“…Not to mention, how good could their security be if a race condition @input defeats the countdown?…”

That could be a marketing decision. Who knows who, within Apple, argued against it?

Apple could have said, “Unless you have an Apple-approved device, you’re gonna have to enter the password every time you plug it in.”

There would probably -still- be a way around that:)

That’s why my security company in Poland is called: “Uderzyć Kret Bezpieczeństwo Komputerów sp.p.”*
..

@0xdeadbeef,
“…This is why software should be tested publicly…”
Yeah, the woman with the bicycle might have something to say about that… if she were still around.


* I hope it’s not necessary to explain this.
    . .. . .. — ….

no@no.com July 18, 2018 6:25 PM

“Apple Computer has one of the best security teams on the planet”.

Really? What is the evidence for this? Basic threat modelling would have picked up this threat. Assumptions of people using “trusted clients/things” is security 101.

The team that finally had something to do after Apple were named partners in the Snowden leaks?

I’m very skeptical of Apple’s motivation.

Billbo July 18, 2018 11:42 PM

This is another reason why I would prefer to use algorithms/implementations which have been vetted by as many individuals/organizations as possible. Even better would be if they were antagonistic. Any algorithm/implementation which has been vetted by the US, UK, Israeli, Russian, Chinese, etc. intelligence agencies for use by their own organizations would have to be good. Right????

Weather July 19, 2018 1:29 AM

AES has a 50-year time frame with data at rest, but based on gut they should re-fix it. Why did you block my posts to an Australian university? That was part of the contract????

Bruce Schneier July 19, 2018 4:37 AM

@billmarker

“So…don’t trust systems designed by experts?”

Almost. Don’t trust systems, period. Or, more precisely, don’t trust the security of complex socio-technical systems. Systems designed by experts will be a lot better, but they won’t be perfect.

I tend to trust systems designed by experts that withstand the test of time.

wiredog July 19, 2018 6:41 AM

The questions to remember are: Who are we trying to be secure from? A Five Eyes country? The local police? J. Random Hacker who bought a stolen phone? Or our inquisitive kid brother?

How long are we trying to be secure? Forever? Months to years? A couple of weeks? Or until we get home and initiate a remote wipe?

If your answers are “Against the government” and “forever” then don’t put the information on a device that they can easily find, seize, and cart out the door. Likewise if you want to keep the data safe from the local police for more than a couple of weeks. Any other case the iPhone or Pixel is probably Good Enough, given the benefits of using it.
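
wiredog’s two questions amount to a small decision rule. Here is a toy restatement in Python; the categories and thresholds simply paraphrase the comment and are not a formal threat model.

```python
def phone_is_good_enough(adversary: str, horizon_weeks: float) -> bool:
    """Paraphrases the rule of thumb above: a device that can be seized is a
    poor place for data you must keep from a government forever, or from the
    local police for more than a couple of weeks; against lesser adversaries
    it is probably fine."""
    if adversary == "government":
        return False            # they can seize the device and wait
    if adversary == "local police" and horizon_weeks > 2:
        return False
    return True

print(phone_is_good_enough("government", float("inf")))  # False
print(phone_is_good_enough("kid brother", 1))            # True
```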

CallMeLateForSupper July 19, 2018 8:32 AM

@jerry 55
“This post is what we regard as a team-killer, elitist mentality that destroys productivity.”

Your post seems to be meant for the anti-net-neutrality forum. That is located about three miles down the hall and to the right. You’re welcome. 🙂

John July 19, 2018 10:32 AM

I’ve been following Apple exploits for some time now, and I am confused by the assertion that Apple has one of the “best security teams on the planet,” as they clearly do not.

echo July 19, 2018 6:56 PM

If people with security needs find iPhones too expensive, a cheaper solution might be to buy a broken iPhone off eBay (good luck trying to get that to work, let alone break into it) and a really cheap generic slab-of-plastic phone as their real phone.

Clive Robinson July 19, 2018 11:49 PM

@ billbo,

Any algorithm/implementation which has been vetted by the US, UK, Israeli, Russian, Chinese, etc. intelligence agencies for use by their own organizations would have to be good. Right????

Wrong…

Whilst an algorithm can be assessed by mathematicians and cryptographers as being “algorithmically secure”, that is just the tip of the iceberg. You still have many other security considerations to go through before you even get remotely close to a secure system.

For instance, it is possible to design a cryptographic system that has both strong keys and weak keys, either deliberately or accidentally[1].

If you are aware of this design issue and run a central key-issue system, then you can select only the strong keys for use[2].

However, if your adversary is not as mathematically or cryptographically skilled as you are, they might acquire your crypto system and copy it. If they are unaware of the weak keys then they will use them, and information will become available to you.
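
To make the central key-issue idea concrete: the issuing authority simply refuses to hand out keys on the published bad list, which an adversary who blindly copies the cipher won’t know to do. A minimal sketch using the four commonly cited DES weak keys (DES also has semi-weak key pairs, omitted here):

```python
import secrets

# The four DES keys usually listed as "weak" (shown with their parity bits).
DES_WEAK_KEYS = {
    bytes.fromhex("0101010101010101"),
    bytes.fromhex("fefefefefefefefe"),
    bytes.fromhex("e0e0e0e0f1f1f1f1"),
    bytes.fromhex("1f1f1f1f0e0e0e0e"),
}

def issue_des_key() -> bytes:
    """Central key issue as described above: generate random keys but never
    hand out one from the published weak-key list. An adversary who copies
    the cipher without knowing about the list gets no such protection."""
    while True:
        key = secrets.token_bytes(8)
        if key not in DES_WEAK_KEYS:
            return key
```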

If you study the history of the British attacks on the German Enigma system, you will see how the breaking of a single message provided meta-data that assisted with the breaking of other messages. And much like a single loose thread on a knitted jumper, pulling on it unravelled the system.

Thus if you trust those people you mention they could conspire against you in some way to provide you with an algorithm or system that is “back-doored”.

This was the issue behind not just the NSA assisting NIST on the random number specification to weaken it, thus causing the specification to be withdrawn, but prior to that the NSA putting the fix in on the rules for the NIST AES competition, which ensured weak implementations riddled with timing side channels would be released onto the US public and just about everybody else as well.

Sadly there are still very many implementations of AES out there that leak information allowing recovery of both key and plaintext, which is why for many years now I’ve advised people to use AES or any standard algorithm implementation “Off Line”[3] in a way that minimizes such leakage opportunities.

There are a couple of sayings that you should remember firstly “Beware Greeks bearing gifts” and the more common inverse meaning of “Never look a gift horse in the mouth”.

[1] Many mechanical cipher systems had this problem, yet they were still put into the field for use by both the knowing and unknowing.

[2] The Data Encryption Standard (DES) had a small number of “weak keys”, a list of which was published with the standard so that users would not use them.

[3] The term “Off Line” has gone through many expansions, contractions and changes over the years. Originally it was about typing up a message on a terminal to “punched paper tape”. That is, you could use such a terminal whilst not connected to the communication line (Off Line) to compose, edit or revise a message prior to going “On Line” to send it in its final form. As it also reduced the cost of sending messages[4], Off Line mode usage was quite common. The cost reduction was even more pronounced when, in the 1970s, “Time Share Computing” on dial-up lines with modems became popular.

[4] For various reasons, both technical and profit-related, phone companies charged for the use of telephone and telex lines by “connection time” multiplied by a distance cost. As both prices were a considerable fraction of the average income through till the 1970s, doing as much of the work Off-Line as possible saved money.

ismar July 20, 2018 6:37 AM

As with anything else in life, security is best seen in terms of degrees of probability rather than in binary terms of black and white. It is the constant struggle to improve it and make it more probable that makes life interesting.
Let’s help Apple keep doing this rather than engage in less constructive polemics about personal agendas.

Glen July 21, 2018 9:05 AM

So many comments about security in general. So few comments about the actual post about USB restricted mode.

No, Bruce Schneier, that company cannot “easily bypass” USB restricted mode. Once enabled, USB restricted mode CANNOT be bypassed—as far as anyone has shown—short of entering the correct passcode. All this company has demonstrated is a method for bypassing the AUTOMATIC ACTIVATION of USB restricted mode, which is a significantly different claim. Even the link you provided points that out. It’s an important distinction.

Apple has, in fact, made the iPhone surprisingly secure as evidenced by the great and expensive lengths government and law enforcement have gone to in attempts to circumvent it.

bttb July 27, 2018 2:33 PM

Found on Matthew Green’s, @matthew_d_green, Twitter history (July 26): https://www.forbes.com/sites/thomasbrewster/2018/07/26/apple-ios-security-boost-not-stopping-cops-hacking-iphones/#22fbf3f87129

“Apple Vs. GrayKey: Leaked Emails Expose The Fight For Your iPhone Privacy

In the fight over digital privacy, Apple is forever adding layer on layer of security to its iPhones. For most users, Apple’s approach is a great boon, keeping all their information away from thieves and hackers. But for America’s cops, it’s causing a headache, preventing them getting into iPhones where they could find valuable and timely information. That’s why police are increasingly turning to private contractors like GrayShift, which Forbes uncloaked earlier this year as it promised to hack its way into the latest Apple cellphones.

Is either side winning? From emails leaked to Forbes, and in conversations with police officials, it would appear on first glance that Apple’s latest updates to its iOS operating system truly have stymied the cops and their GrayKeys. But, at the same time, police still have a way to hack into iPhones, even the latest models, the emails show. The messages were shared by an anonymous source who had access to a private email list subscribed to by members of the police and digital forensics communities….”

AreWeSecure? August 16, 2018 3:36 PM

Two issues no one has raised:

1-Engaging emergency SOS mode on an iPhone and immediately canceling it now turns on Restricted Mode and deactivates TouchID and FaceID, requiring a passcode. And of course being required to divulge a passcode isn’t the same as being required to activate FaceID or TouchID. So Restricted Mode can now be turned on at will on iPhones. It will be interesting to see whether Apple enables something similar for iPads. But for iPhones this defeats Elcomsoft’s trick.

2-Interesting that no one has commented on whether pair locked iOS devices are vulnerable to Cellebrite, GrayKey, or any other forensic method. Given who publicized the method and where he now works, a good guess would be no, since a pair locked device won’t connect to any device except the one that pair locked it, even if it’s unlocked by passcode, TouchID, or FaceID. But you better not be traveling with that other device.

Bradley Ross November 12, 2018 10:26 AM

I have seen a number of comments that Apple iOS is now safe from Cellebrite and Graykey, and also a number of comments that Cellebrite and Graykey can get around Apple security. Does anyone know if the exploits work with the latest releases of the iOS operating system? I personally find the items from Apple disturbing, as it is unclear if the underlying problem has been completely fixed.

https://bradleyaross.wordpress.com/2018/08/27/failure-by-apple-to-stop-iphone-unlock-exploit/ is a blog post I wrote describing my opinion on the exploit. The biggest problem is that it should be impossible to try a password without incrementing the counter. As long as that is possible, I have doubts whether any patches will be effective.
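
Bradley’s requirement is essentially an ordering constraint: commit the attempt before checking the guess, so interrupting power mid-check can’t yield a free try. A toy sketch of that idea follows; this is not Apple’s code, and the storage and hashing details are stand-ins.

```python
import hashlib
import hmac
import os

class PasscodeVerifier:
    """Toy model of increment-before-verify.

    The failed-attempt counter is committed (here, just an attribute standing
    in for tamper-resistant storage) *before* the guess is checked, so cutting
    power mid-attempt cannot give an attacker a free guess."""

    def __init__(self, passcode: str, limit: int = 10):
        self._salt = os.urandom(16)
        self._digest = self._hash(passcode)
        self.attempts = 0
        self.limit = limit

    def _hash(self, passcode: str) -> bytes:
        return hashlib.pbkdf2_hmac("sha256", passcode.encode(), self._salt, 100_000)

    def try_passcode(self, guess: str) -> bool:
        if self.attempts >= self.limit:
            raise RuntimeError("attempt limit reached: device locks or wipes")
        self.attempts += 1  # commit the attempt FIRST
        if hmac.compare_digest(self._hash(guess), self._digest):
            self.attempts = 0  # only a successful unlock rolls the counter back
            return True
        return False
```

Under this ordering, the race Bradley worries about can only ever cost an attacker attempts, never gain them one.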
