Brute-Forcing iPhone PINs

This is a clever attack, using a black box that attaches to the iPhone via USB:

As you know, an iPhone keeps a count of how many wrong PINs have been entered, in case you have turned on the Erase Data option on the Settings | Touch ID & Passcode screen.

That’s a highly-recommended option, because it wipes your device after 10 passcode mistakes.

Even if you only set a 4-digit PIN, that gives a crook who steals your phone just a 10 in 10,000 chance, or 0.1%, of guessing your unlock code in time.

But this Black Box has a trick up its cable.

Apparently, the device uses a light sensor to work out, from the change in screen intensity, when it has got the right PIN.

In other words, it also knows when it gets the PIN wrong, as it will most of the time, so it can kill the power to your iPhone when that happens.

And the power-down happens quickly enough that your iPhone doesn’t have time to subtract one from the “PIN guesses remaining” counter stored on the device. (It seems you need to open up the iPhone and bypass the battery, so that the device can be powered entirely via the USB cable.)

Because every set of wrong guesses requires a reboot, the process takes about five days. Still, a very clever attack.
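As described, the box’s loop can be sketched in Python. The `box` hardware interface and all of its method names are entirely hypothetical stand-ins for the device’s actual electronics:

```python
import time

def brute_force(box, pins):
    """Sketch of the black box's loop as described above.
    `box` is a hypothetical hardware interface: it can power the
    phone over USB, type a PIN, and read a light sensor aimed at
    the screen."""
    for pin in pins:
        box.power_on()
        box.wait_for_lock_screen()
        baseline = box.light_level()   # screen intensity before the guess
        box.type_pin(pin)
        time.sleep(0.1)                # brief window before the counter persists
        if box.light_level() > baseline:   # screen changed: phone unlocked
            return pin
        box.power_off()   # cut power before the counter is decremented
    return None
```

The key step is the unconditional `power_off()` on a wrong guess, which is what keeps the try counter from ever advancing.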

More details.

Posted on March 30, 2015 at 6:47 AM • 44 Comments


xxx March 30, 2015 7:06 AM

Seems the fix should be simple: decrease the counter before PIN entry, then reset it back to 10 after the PIN was entered successfully.

Maarten Bodewes March 30, 2015 7:08 AM

OK, but this shows a blatant error with regards to the PIN code handling. Every security professional knows that you should decrement first, then check for the correct PIN.

It seems that Apple again places convenience before security (at least if I understand the use case correctly, where the screen lights up before the actual unlock happens).

arfnarf March 30, 2015 7:08 AM

The right way to do it:
Enter PIN
Decrement counter
Check valid PIN

The wrong way to do it:
Enter PIN
Check valid PIN
Decrement counter
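A minimal sketch of the “right way”, with a JSON file standing in for the phone’s non-volatile counter storage. The file name and hard-coded PIN are illustrative only; the real iOS code is not public and checks the passcode against a hardware-derived key rather than a stored value:

```python
import json
import os

COUNTER_FILE = "pin_tries.json"  # stands in for non-volatile storage
MAX_TRIES = 10
CORRECT_PIN = "4821"             # illustrative only

def load_tries():
    if os.path.exists(COUNTER_FILE):
        with open(COUNTER_FILE) as f:
            return json.load(f)["tries"]
    return 0

def save_tries(n):
    # Write and fsync BEFORE checking the PIN, so a power cut
    # after the check cannot erase the record of this attempt.
    with open(COUNTER_FILE, "w") as f:
        json.dump({"tries": n}, f)
        f.flush()
        os.fsync(f.fileno())

def try_pin(pin):
    tries = load_tries()
    if tries >= MAX_TRIES:
        return "wiped"           # erase-data policy kicks in
    save_tries(tries + 1)        # 1. record the attempt durably
    if pin == CORRECT_PIN:       # 2. only then check it
        save_tries(0)            # 3. reset on success
        return "unlocked"
    return "locked"
```

With this ordering, cutting power after the screen reacts changes nothing: the attempt was already on disk before the check ran.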

Clive Robinson March 30, 2015 7:54 AM

Another example of why getting security right is actually quite hard.

In this case I suspect it is caused by just the way the programmers concerned think. That is, the more natural “test for action, then take action” rather than the less intuitive and certainly more clumsy “take action, then test if you should have taken the action”.

One thing people often find difficult to get to grips with is that security sometimes requires the horse to push rather than pull the cart. An example from history: you put armour between you and the threat you are approaching; whilst pushing is harder than pulling, it’s a lot safer…

Patrick Jarrold March 30, 2015 8:39 AM

With a 12-digit passcode, this method would take more than a million years to try all possibilities. Including lowercase letters in a passcode of that length increases the time required to over 6 billion years. Only using 8 digits would still take more than 100 years.
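The exact figures depend on the per-guess time assumed; here is the arithmetic at roughly 43 seconds per guess, the rate implied by the article’s five-day estimate for 10,000 four-digit PINs (note that under this assumption the digits-plus-lowercase case comes out far larger still):

```python
SECONDS_PER_GUESS = 5 * 86400 / 10_000   # 5 days / 10,000 PINs ≈ 43.2 s
SECONDS_PER_YEAR = 365.25 * 86400

def worst_case_years(alphabet_size, length):
    """Time to exhaust the whole keyspace at one guess per power cycle."""
    return alphabet_size ** length * SECONDS_PER_GUESS / SECONDS_PER_YEAR

print(f"12 digits:           {worst_case_years(10, 12):,.0f} years")
print(f" 8 digits:           {worst_case_years(10, 8):,.0f} years")
print(f"12 digits+lowercase: {worst_case_years(36, 12):,.0f} years")
```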

Igor March 30, 2015 9:37 AM

I suppose this attack wouldn’t work on VoiceOver users who happen to have their screen brightness set to zero. But then again, the users that fit this particular use case are in the minority.

NobodySpecial March 30, 2015 10:42 AM

@Patrick – entering a 12-digit PIN every time you want to answer a call or pause an MP3 might be a little inconvenient.

I have a password generator (outputs the SHA-512 of a secret+salt) for my online banking. Went into the branch and was asked to enter my password on their machine – got some very odd looks when it took 10 minutes and several goes to get the 64-character result typed correctly.

Anon March 30, 2015 11:06 AM

Isn’t this completely pointless for someone with this level of access to the hardware? Wouldn’t it be simpler to rip out the SSD or whatever they use, and write your own PIN-guessing code with no maximum-guess limit or throttling? That way you can get at the data in seconds.

JDM March 30, 2015 11:25 AM

“Isn’t this completely pointless for someone with this level of access to the hardware?”

It would actually be very handy for the market in used iPhones. Many are password locked – password forgotten or never gotten from a previous owner, even if it’s a legit sale. You can buy these for a hundred to several hundred dollars less than one which isn’t password locked. Sellers could do this procedure and make a lot more money on a sale.

albert March 30, 2015 11:29 AM

It’s obvious to everyone….now. The question I have is: Why didn’t Apple think of this? The fix is trivial.
Folks getting paid to do security aren’t doing their jobs. 9/11 was a prime example.
The good guys need to think of these things first and take action. Reactive agencies will always be losers. Let’s focus on prevention, and less on punishment. How does one punish a suicide bomber?
“…has a trick up its cable….” Good one:)

Wael March 30, 2015 11:30 AM


Isn’t this completely pointless for someone with this level of access to the hardware?

Not if the intent is to reset the password, root the (stolen/found) phone and sell it for a “profit”. An attacker may or may not (not the seller’s problem) also utilize mechanisms to defeat “Find my phone” or “Disable the phone”; they wouldn’t care about “Wipe the phone”; it’s a free factory reset…

Wael March 30, 2015 11:37 AM


How does one punish a suicide bomber?

Wouldn’t “How does one remove the conditions that create suicide bombers” be a more worthy question to ask? An ounce of prevention is worth a truckload of… cure

Anura March 30, 2015 12:03 PM


The iPhone uses an AES key stored in hardware that is not accessible externally – I’m not sure of the details, whether it’s a TPM or “burned into the CPU” as some articles indicate – so you can’t just rip out the storage and break it externally. Now, you might be able to modify the hardware to just prevent writes to the flash memory, unless it is part of a TPM itself and then you would need much more sophisticated technology to be able to break it. I know there was a paper about recovering keys from a TPM using an electron microscope and a probe, but not everyone has access to that kind of equipment.

Nick P March 30, 2015 12:05 PM

@ Wael

You want to remove the “human condition?” That’s genocide at a global scale!

@ all

re attack

The attack is clever… outside the smartcard industry. Their threat model assumes power might be cut off during any computation. They mitigate against this for reliability and security in their designs. Seems the rest of industry could use a boot camp from smart card designers on how to handle basic physical and side channel attacks. There’s significant overlap between security issues of mobile and smartcard SOC’s. Things will get interesting when people hitting smart cards notice this, grab some Apple/Samsung SOC’s, and start applying their existing attack strategies to unprepared targets.

Funny part is that Apple can afford to straight up buy one of the top smartcard and security IC vendors. Then, they could freely use their intellectual property in all their designs. They could also direct the acquired company to make IC’s that work with their offerings better than anyone else’s. Apple’s billions in profit and ARM license makes their security failures less excusable than most. They’re one of the few that could straight up build a clean slate chip and be done with most security issues.

Wael March 30, 2015 12:11 PM


I don’t believe iPhones or any of Apple’s computers currently use TPMs. The laptops had a TPM a few years back, but it was subsequently removed… There is a really good paper published by Apple explaining their security mechanisms (unusual for secretive Apple to publish such a paper). I’ll post it later…

Jolene Johnson March 30, 2015 12:12 PM

I was just reading a similar article about this same thing, our false sense of security is just that. What we believe to be secure, someone somewhere has already figured out a way to circumvent it.

Wael March 30, 2015 12:18 PM

@Nick P,

That’s genocide at a global scale!

I don’t want to derail the subject, but keep a note to elaborate on your response when the right thread arises in the future. I’ll remind you.

Damien March 30, 2015 12:21 PM

As I understand it, this is an attack on just the data; getting a blank device doesn’t require anything this sophisticated. “About five days” is a fair amount of time to spend to access one person’s data on a small flash device, and the victim certainly has enough time to mitigate risks to any network data not currently cached on the device. Still, this is more evidence for the rather obvious conclusion that a 4-digit passcode is insufficient as a security factor. I’ll bet that in many cases where this level of effort would be warranted, the five days could be dramatically shortened by intelligent use of the victim’s personal details (SS#, special dates, etc.) in the guessing process.

Flash writes are slow. I wouldn’t be surprised if this was essentially an Apple design decision grounded in a requirement for snappy UI response. This certainly is a creative attack.

Nick P March 30, 2015 12:40 PM

@ Jolene

re reason for pervasive insecurity

The trick is that we’ve known the bare minimum to achieve security in our devices and software for over a decade. There’s even official criteria* for the software side. There’s many papers detailing attacks and issues in the hardware side. The reason the security engineering doesn’t happen is because there’s no demand: people won’t pay extra or sacrifice convenience for real security. It’s rare a company offering a more secure product avoids bankruptcy.

Additionally, the extra techniques eat at the profit of the supplier. That means they won’t supply unless (a) they’re paid extra to cover the costs or (b) they’re forced to add safety/security to participate in the market. The lack of demand eliminates (a) except in tiny, niche markets. The Computer Security Initiative and FAA’s DO-178B process are examples of regulation (b) that was successful. The CSI no longer applies and safety-critical certifications are themselves limited.

So, the evidence indicates that only incentive-driven regulation that’s applied pervasively will change our INFOSEC situation. The evidence also indicates that the market will never produce or adopt secure systems on a large scale. Those of us wanting INFOSEC therefore must make pretty severe tradeoffs in terms of usability, features, and cost. The solution will therefore require the action of lawmakers. I’m not holding my breath on that one.

*Note: Things start getting difficult to attack at EAL5+ and maybe secure at EAL6-7. Most OS’s, databases, firewalls, and so on are certified to EAL2-4. That means they’re not only insecure: they’ve been formally certified as insecure by a third party. Makes it funny when they brag about their certifications to customers. Makes it sad when most of those customers buy it with extra confidence. (sighs)

DB March 30, 2015 2:05 PM

  1. If login-tries counter is too high, wipe data (user configurable)
  2. Force an exponentially-longer wait, based on login-tries counter
  3. Let user enter PIN
  4. Increment login-tries counter (stored in non-volatile memory)
  5. Check valid PIN
  6. If not successful: loop to #1
  7. If successful: reset login-tries counter and login

The programmer error is in thinking of it as a failure counter instead of a login-try counter. You’re not counting failures, you’re counting tries… all tries… both successful and unsuccessful; you just reset the count after success. And it needs to increment, not decrement, so that you can keep the exponentially-longer wait going.

Also, #1 and #2 should come first, before even allowing the PIN entry… you have to do your security-related steps regardless of whether you just came up from a power outage. And you need both #1 and #2 because they complement each other: the wait is there for people who really don’t want to let enemies easily wipe the device, and the wait should not be configurable; it should always be there as a matter of normal security policy.
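The numbered steps can be sketched as follows. The in-memory counter stands in for non-volatile storage, the sleep function is injectable so the exponential wait can be stubbed out, and all names are illustrative:

```python
import time

class PinGuard:
    """Counts *tries*, not failures, per the numbered steps above."""
    WIPE_AT = 10

    def __init__(self, correct_pin, sleep=time.sleep):
        self.correct_pin = correct_pin
        self.tries = 0          # stands in for non-volatile storage
        self.wiped = False
        self._sleep = sleep     # injectable so tests can skip real waits

    def attempt(self, pin):
        # Steps 1-2 run BEFORE PIN entry, even right after a reboot.
        if self.tries >= self.WIPE_AT:
            self.wiped = True
            return "wiped"
        self._sleep(2 ** self.tries)   # step 2: exponential backoff
        self.tries += 1                # step 4: increment the try count...
        if pin == self.correct_pin:    # ...BEFORE step 5's check
            self.tries = 0             # step 7: reset only on success
            return "unlocked"
        return "retry"
```

Because the counter is bumped before the check, a power cut anywhere after `attempt` starts still leaves the try recorded (on real hardware, step 4 would be a durable write).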

@ Damien

You could be right that this is prioritizing snappy UI response over security… But if true, then I’d consider that kind of deliberate security-bungling to be much worse than simple incompetence.

albert March 30, 2015 4:10 PM

I did mention prevention. I would say “An ounce of prevention is worth a shipload of cure”:)
No one wants to talk about ‘prevention’, as it’s contrary to US foreign policy.

DB March 30, 2015 4:33 PM

There’s no money in prevention, the money is in the massive cleanup.

Never mind how short-sighted this may be; you have to think about it from the other side’s perspective. A company’s goal is not to create more efficiency in competing companies, but to extract as much money as possible. You do this better by simply watching the other company fall, or maybe erecting a few more barriers for them to trip over, or even giving them a little push too… This is the way business works; it’s not egalitarian. There’s big business in wars, poverty, starvation, slavery, etc…

If you want the opposite, you have to actively fight the natural tendency. Just beware it’s an uphill battle all the way, and you will not be the most powerful dominating force.

albert March 30, 2015 4:36 PM

@Nick P
“…Funny part is that Apple can afford to straight up buy one of the top smartcard and security IC vendors….” etc.
This is a logical first step. If you own it, you control it. Not effective when the Security State comes aknockin’ with ‘court’ orders, but a giant step, nonetheless.
While I’m wishing in one hand, other giant steps I’d like to see are:

  1. A consortium of companies like Google, Apple, Microsoft, etc. (whose businesses are not based on providing internet infrastructure – like AT&T), to provide low-cost fiber for customers.
  2. A single, industry-funded non-profit organization to clean up the CA mess.
    It doesn’t look like we’re going to get away from the SS anytime soon, but industry has the money and influence to take some action.
    I gotta go…

name.withheld.for.obvious.reasons March 30, 2015 6:31 PM

@ Nick P

So, the evidence indicates that only incentive-driven regulation that’s applied pervasively will change our INFOSEC situation. The evidence also indicates that the market will *never* produce or adopt secure systems on a large scale. Those of us wanting INFOSEC therefore must make pretty severe tradeoffs in terms of usability, features, and cost. The solution will therefore require the action of lawmakers. I’m not holding my breath on that one.

I suggest that something else sits at the bottom of the current environment relative to the lack of robust systems: PLAUSIBLE DENIABILITY. In the commercial software environment there is an implicit constraint on manufacturer defect liability. Any End User License Agreement is encapsulated in language that shifts liability for the use of the manufacturer’s product onto the purchaser/customer/victim. Essentially the manufacturer states that the product cannot be warranted to be free of defects….

This alone allows production decisions that are not in the best interest of the customer/victim.

If “cyber-insurance” takes off it will only be a matter of time when underwriters figure out that companies may not be exercising due care regarding the quality of the manufacturer’s product.

Thoth March 30, 2015 6:55 PM

What it requires is a set of passive booby-trap PINs randomly distributed across a set of numbers. If one of those booby-trap PINs is entered, you know someone is brute-forcing, and the phone would immediately harden itself: the correct PIN must then be entered against a stored counter rather than the normal transient counter, so that three unsuccessful tries, even across power cycles, would cause the device to be wiped or some other remedial action taken.

Thoth March 30, 2015 6:58 PM

A modification to the above: not only record the wrong tries in a stored counter, but also decrement the number before doing any GUI display whatsoever. And if the attacking device is using power analysis, the algorithm should scramble its logic so that the branch taken cannot be inferred from the if-else paths and the power trace.

Anura March 30, 2015 7:08 PM

My guess is that they are decrementing before writing to the display, but they are not closing the file handle until afterwards so it stays in the buffer without getting written.

Anura March 30, 2015 7:09 PM

For clarification, I guess that because it would be really strange looking code otherwise.
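Whether or not that is what Apple’s code does, the failure mode itself is easy to demonstrate: a write that has not been flushed and fsync’ed may never reach stable storage, so a power cut at the wrong moment loses it. The file name here is just for illustration:

```python
import os

path = "counter.txt"
f = open(path, "w")
f.write("9")          # decremented counter, but only in Python's buffer

# A second reader (or the flash chip, after a power cut) sees nothing:
with open(path) as g:
    assert g.read() == ""   # the write never left the user-space buffer

f.flush()              # push Python's buffer to the OS...
os.fsync(f.fileno())   # ...and force the OS buffer onto stable storage
with open(path) as g:
    assert g.read() == "9"  # only now is the value durable
f.close()
```

Decrement-before-display only helps if the decrement is durable before the display; a buffered write gives the attacker exactly the race window this black box exploits.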

Dirk Praet March 30, 2015 7:16 PM


This is CVE-2014-4461. It was fixed back in iOS 8.1.1, in November.

Are you sure? The guys at MDSec themselves seem to think it’s CVE-2014-4451, which Apple claims was fixed in 8.1.1. But it’s probably better to await formal confirmation that the Blackbox attack indeed no longer works as of 8.1.1 than just to take their word for it.

Over the weekend, I was going through a quite interesting SyScan 2015 presentation by Stefan Esser, titled “iOS 678 Security – A study in Fail”. Turns out that Apple too seems to be suffering from sloppy security patches for vulnerabilities that need to be fixed over and over again (remember the M/S Stuxnet patch). This seems to apply especially to jailbreak persistence (Pangu/TaiG), leaving the entire system open to full pwnage for extended periods of time (3 months to 2 years).

@ NobodySpecial, @ Patrick Jarrold

entering a 12 digit pin everytime you want to answer a call or pause an MP3 might be a little inconvenient.

Which brings us back to the classic security vs. convenience debate. But even a strong 8-character password (alphanumeric, upper/lower case, special characters, etc.) would already make a world of difference against this specific attack. Ditto for commercial products like Elcomsoft’s iOS Forensic Toolkit. I actually don’t understand why Apple still allows 4-digit PINs for any purpose other than catering to LE and owners suffering from Alzheimer’s.

@ Thoth

… so that three unsuccessful tries even if power rebooted to attempt to unlock the device would cause the device to be wiped or some remedial actions taken.

Caution is advised with this sort of approach as it allows for easy DOS-attacks. You leave your phone on the table for a minute, and one of your drunk buddies has wiped everything because he wanted to play Angry Birds. By the time you get home, you realize your last backup was from two years ago and you never bothered to correctly configure iTunes syncing either.

Thoth March 30, 2015 7:47 PM

@Dirk Praet
If someone is willing to leave their phone on the desk, they really need some kind of wake-up call. Better sad and wiped than sacked and hacked.

DB March 30, 2015 8:30 PM

If some dude looking at your phone is going to get you sacked, you’d better have very beefy security on that phone….

On the other hand, if you REALLY have little of any importance on there, then maybe you really could go for greater convenience.

It’s your call, it’s your job on the line… (or not) 🙂

Seriously, this is the things people should be weighing especially with physical security of devices.

However, NETWORK security is a different matter, because it has wider reach. Every criminal in the world can reach every always-connected device all the time, just by guessing its IP address! You ABSOLUTELY MUST have good security there, to withstand every worst attack there is out there, or the device is basically useless to you. This is why the possibility of deliberate backdoors and sabotage to BIOSes and firmwares and things allowing remote access is so much more aggravating than an unlock screen.

Nick P March 30, 2015 9:55 PM

@ name.withheld

True. That helps a lot. Remember, though, that many manufacturers were building more secure devices as far back as the 60’s. Many high reliability systems showed up in the 70’s and 80’s. Yet, which did the market decide on? You guessed it: the ones that came with the enhanced, intellectual-property dissemination option. Your own tales of fighting to get project managers to care about security in a security-critical sector support my assertion of demand-side issue.

If anything, what we see with the EULA’s is a parallel problem that appears to have showed up as computing became a commodity. The incentive was to get the hardware and software out the door economically rather than robustly. Even people in high assurance caved in. Bill Gates particularly pioneered the technique of selling cheap hardware with licensed, backward-compatible software. Others did it with different hardware and software. Once lock-in strategy was working, they developed much of what was in the EULA’s as an externality that customers would be stuck with.

@ All

What do you think about the SOC adding a sensor to detect repeated power-offs with nonvolatile memory that integrates with security software? The idea is, like with smart cards, certain types of events signaling attack would be monitored. The software, before messing with password, would send a command to the monitoring system. It updates its internal state that a password attempt is happening. It checks for records of many attempts or strange behavior. If results are reasonable, it tells the system to check the password. If results are anomalous, it tells the software to refuse password entry and switch to a fail-safe mode. Such a mechanism can support arbitrary, security-critical code along with numerous types of attack or device failures. Fairly simple, too.
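A software prototype of that monitoring policy might look like this, with a persisted event list standing in for the SOC’s non-volatile monitor memory; the thresholds and class name are entirely hypothetical:

```python
import time

class AttemptMonitor:
    """Tracks password-entry events across power cycles. The event
    list stands in for the SOC's non-volatile monitor memory."""
    MAX_EVENTS = 10        # hypothetical threshold
    WINDOW = 3600          # examine the last hour of events

    def __init__(self, clock=time.time):
        self.clock = clock
        self.events = []   # would persist across reboots on real hardware

    def request_check(self):
        """Called by the password logic BEFORE it touches the counter."""
        now = self.clock()
        self.events.append(now)
        recent = [t for t in self.events if now - t < self.WINDOW]
        if len(recent) > self.MAX_EVENTS:
            return "fail-safe"   # anomalous: refuse password entry
        return "proceed"
```

Because the monitor records the event before answering, a power cut during the password check still leaves evidence behind, which is exactly the property the current design lacks.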


Buck March 30, 2015 10:46 PM

@Nick P

What do you think about the SOC adding a sensor to detect repeated power-offs with nonvolatile memory that integrates with security software?

It sounds like a very expensive and ultimately useless endeavour to me… Sure, it might catch a few folks off guard initially, but then everyone else will simply ‘randomize’ their power-offs – just enough to defeat your new security measure. 😉

Clive Robinson March 30, 2015 11:00 PM

@ Albert,

This is a logical first step. If you own it, you control it. Not effective when the Security State comes aknockin’ with ‘court’ orders, but a giant step, nonetheless.

Err, not quite the case. Have a read of this article:

Towards the bottom you will find an interesting comment about a privately developed process that the US Gov classified from the get-go.

The important note is that what they can do once, they can do endlessly. The current POTUS is a control freak with no real legacy; overclassifying cyber activity is a pet distraction of his that his opponents agree with. Thus expect more draconian computer/cyber legislation making your secrets their secrets, so you cannot put them into products etc., completely subverting cyber security to the benefit of the likes of the NSA, FBI, et al, oh, and also every tinpot dictator around the world…

Figureitout March 31, 2015 12:08 AM

Nick P
–Well…lol, make such an idea and more importantly implementation reliable, and yeah, sure. Need more details (if there’s any connection whatsoever between the memories, it’s a risk). Of course, you can see holes still there eh? That’s why making something “bulletproof” is so hard when you can’t trust components. Gets back to what computer are you going to use to program it? Got support for Coreboot -> OpenBSD or TAILS?–Nope? Xcode it is! (I can’t deal w/ the bouncy annoying UI of apple, maybe someone else can).

Also, something about the security market, plenty of people here love to call out the market making “military grade” and “ultimate SECURITY FOR YOU” marketing claims for products. People probably think it’s all bullsh*t, and thus paying the extra is just a profit game. No doubt it’ll bite us in the ass when the ultimate hack takes down virtually all networked computers, some day…

Clive Robinson March 31, 2015 12:32 AM

@ Nick P,

What you are describing is an anti-tamper device, which unfortunately has a historically poor record (see the number of HSM trips/traps that can be bypassed).

The earliest anti-tamper devices I’m aware of are in locks, and they ended up being bypassed or, worse, made the lock unreliable for the owner.

Whilst software itself will not suffer from “mechanical bind”, it will only act on what the programmers think the sensors say, and history shows that careful use of freezing, over/under voltage, or strong magnetic or EM fields can push sensors into nonlinear, non-sensing or false-sensing modes that programmers have either not predicted or cannot use reliably or at all.

But extra software, even with the best will in the world is going to increase the potential attack surface.

The other thing to consider is that in this case it is a “negative benefit”[1]: it does not add to the user’s day-to-day activities, it only detracts if triggered. Further, the triggering is, as others have noted, more likely to be due to somebody wanting access to Angry Birds than to confidential data.

But user lifestyle issues also creep in: the sensors cannot tell the difference between a three-year-old having a hissy fit and a heavy-handed LEO… In normal high-security work, three-year-olds “don’t get close” to secure devices, so the point is moot; but your mobile phone will, in normal use, “meet the family”.

The problem is that even though anti-tamper can be a “positive benefit” when it comes to data, as with banknotes[2], for most users the value of their data is unique to them and thus not replaceable –unless they have made appropriate backup arrangements– or the holders of the value –the entertainment industry– will not redeem the value of lost data.

[1] Anti-tamper systems are not always “negative benefit”; for instance, in deployed munitions such as bombs they have a “positive benefit” from the designer’s point of view, in that they disproportionately kill bomb-disposal personnel.

[2] Disproportionate cost can also make anti-tamper devices a “positive benefit”: for instance, ink canisters in cash drawers and in transport/ATM storage, where inking the notes makes them valueless to attackers but only incurs a small cost for the legitimate owners or their insurers to exchange them. Accidental triggering has a minor direct cost compared to a large indirect saving in insurance costs and fewer robberies and their attendant costs.

Thoth March 31, 2015 6:35 AM

@Clive Robinson, Nick P
Most HSMs are only moderately secure and, as I have pointed out many times in the past, not as secure as advertised. Ross Anderson and his team did extensive research on bypassing tamper traps; they are rather easy to bypass, and their designs are a black box.

Backup capacitors or tamper batteries, with keys loaded into transient memory, would be much more desirable, but it would be a hassle to do key loading every time you want to use a crypto-chip; so for convenience, you sacrifice security by leaving your plain keys in “tamper-resistant register memory cells”. If a tamper battery or built-in energy reservoir/tamper capacitor were in place, abruptly powering down the device would trigger the surplus energy reservoir to complete its task within a limited time window, which might have been useful here, though a tasklet can take a while.

The problem with Apple’s PIN design leans more towards logic code not handling the counter before the visual display (which alerts the light sensor in the automated brute-force hacking device).

albert March 31, 2015 10:56 AM

Of course the SS can do whatever it wants. Nuclear weapon development is still highly classified, but the day is fast approaching when such ‘classification’ will become completely useless. I suspect fission devices can be built by anyone with access to materials, right now. As the article pointed out, censoring is determined by who’s doing the talking.
There are thousands of classified ‘patents’. IIRC, they are mostly patent applications, which are destined never to see the light of day. Tesla’s entire collection of notes and journals was stolen by the FBI immediately after his death.
Congress is as useless as tits on a boar hog. They don’t know what they’re doing, and have no oversight of anything important.
What I’m looking for is a showdown between the SS and Big Business, namely the major players in IT. The whole economic system depends on IT security. The SS is the boy who cried wolf, and the sooner the corporatocracy realizes this, the sooner they can take action.
“Locks keep honest people honest.” When I was young, forgetting to lock the door was not a problem. When my dad was young, when summer vacation time came, they couldn’t find the house key!

Sidebar photo of Bruce Schneier by Joe MacInnis.