Is iPhone Security Really this Good?

Simson Garfinkel writes that the iPhone has such good security that the police can’t use it for forensics anymore:

Technologies the company has adopted protect Apple customers’ content so well that in many situations it’s impossible for law enforcement to perform forensic examinations of devices seized from criminals. Most significant is the increasing use of encryption, which is beginning to cause problems for law enforcement agencies when they encounter systems with encrypted drives.

“I can tell you from the Department of Justice perspective, if that drive is encrypted, you’re done,” Ovie Carroll, director of the cyber-crime lab at the Computer Crime and Intellectual Property Section in the Department of Justice, said during his keynote address at the DFRWS computer forensics conference in Washington, D.C., last Monday. “When conducting criminal investigations, if you pull the power on a drive that is whole-disk encrypted you have lost any chance of recovering that data.”

Yes, I believe that full-disk encryption—whether Apple’s FileVault or Microsoft’s BitLocker (I don’t know what the iOS system is called)—is good; but its security is only as good as the user is at choosing a good password.

The iPhone always supported a PIN lock, but the PIN wasn’t a deterrent to a serious attacker until the iPhone 3GS. Because those early phones didn’t use their hardware to perform encryption, a skilled investigator could hack into the phone, dump its flash memory, and directly access the phone’s address book, e-mail messages, and other information. But now, with Apple’s more sophisticated approach to encryption, investigators who want to examine data on a phone have to try every possible PIN. Examiners perform these so-called brute-force attacks with special software, because the iPhone can be programmed to wipe itself if the wrong PIN is provided more than 10 times in a row. This software must be run on the iPhone itself, limiting the guessing speed to 80 milliseconds per PIN. Trying all four-digit PINs therefore requires no more than 800 seconds, a little more than 13 minutes. However, if the user chooses a six-digit PIN, the maximum time required would be 22 hours; a nine-digit PIN would require 2.5 years, and a 10-digit pin would take 25 years. That’s good enough for most corporate secrets—and probably good enough for most criminals as well.
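Garfinkel's arithmetic is easy to check. A quick sketch of the numbers, using his fixed 80 milliseconds per guess:

```python
# Back-of-the-envelope check of the brute-force times quoted above,
# assuming a fixed cost of 80 ms per PIN guess (Garfinkel's figure).
SECONDS_PER_GUESS = 0.080

def worst_case_seconds(pin_digits: int) -> float:
    """Time to try every possible PIN of the given length."""
    return (10 ** pin_digits) * SECONDS_PER_GUESS

for digits in (4, 6, 9, 10):
    total = worst_case_seconds(digits)
    print(f"{digits}-digit PIN: {total:,.0f} seconds "
          f"(~{total / 31_557_600:.1f} years)")
```

These are upper bounds for trying every PIN, and the 80 ms figure itself assumes the guessing really must happen on the phone.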

Leaving aside the user practice questions—my guess is that very few users, even those with something to hide, use a ten-digit PIN—could this possibly be true? In the introduction to Applied Cryptography, almost 20 years ago, I wrote: “There are two kinds of cryptography in this world: cryptography that will stop your kid sister from reading your files, and cryptography that will stop major governments from reading your files.”

Since then, I’ve learned two things: 1) there are a lot of gradients to kid sister cryptography, and 2) major government cryptography is very hard to get right. It’s not the cryptography; it’s everything around the cryptography. I said as much in the preface to Secrets and Lies in 2000:

Cryptography is a branch of mathematics. And like all mathematics, it involves numbers, equations, and logic. Security, palpable security that you or I might find useful in our lives, involves people: things people know, relationships between people, people and how they relate to machines. Digital security involves computers: complex, unstable, buggy computers.

Mathematics is perfect; reality is subjective. Mathematics is defined; computers are ornery. Mathematics is logical; people are erratic, capricious, and barely comprehensible.

If, in fact, we’ve finally achieved something resembling this level of security for our computers and handheld computing devices, this is something to celebrate.

But I’m skeptical.

Another article.

Slashdot has a thread on the article.

EDITED TO ADD: More analysis. And Elcomsoft can crack iPhones.

Posted on August 21, 2012 at 1:42 PM • 65 Comments


Ben August 21, 2012 1:59 PM

According to an LE forensics acquaintance, there isn’t rate limiting or an attempt limit on the iPhone’s data interface, and that interface is also much faster. As such, it’s unclear where Garfinkel gets his time estimate for cracking the PIN. More importantly, the Garfinkel article makes the iPhone sound like some super-secure platform, when in reality it’s just another OS with myriad poorly coded apps and users who don’t generally practice safe computing. He’s definitely gone over the top.

Duane Gran August 21, 2012 2:08 PM

Very timely article as I was just recently researching whole device encryption for mobile devices and the iPhone is clearly the leader in the range of effectiveness vs ease of use. Apple seems to have hit the right balance.

Android, on the other hand, is a bit of a mess. There are good apps to encrypt on a file-by-file basis, but there is nothing as comprehensive as what Apple has done unless you get to the Honeycomb release of Android, and even then the feature is buried pretty deep in the settings. I’m hoping this improves soon.

jdw August 21, 2012 2:19 PM

The quote from the guy at the justice department is probably more of a strategic position than a factual one.

It could partially be designed to lull criminals into a false sense of security (which is fine). But, call me cynical/paranoid if you want, it also looks like the stance the government would take preceding a renewed call for legal restrictions on encryption. “Oh we just can’t keep up with the criminals! We need new laws!”

We may be about to see a renewed call for government key escrow (Clipper chip 2.0?), or we may look for some law to the effect of equating refusal to decrypt with destroying evidence — meriting a jury instruction to assume the worst, possibly with additional obstruction penalties.

karrde August 21, 2012 2:20 PM

This may reflect a gap in understanding between the average (U.S.) Police agency and the average IT person tasked with recovering data from an iDroid-type device.

But I’m mostly guessing.

GTheGrey August 21, 2012 2:34 PM

This statement leaves me skeptical: “This software must be run on the iPhone itself, limiting the guessing speed to 80 milliseconds per PIN.”
I mean, it is an algorithm. It can run anywhere, unless it requires some secret hardcoded into the chip itself; but even in that case the secret can be recovered from the machine, and the algorithm can be executed on hardware a billion times faster, cracking a 10-digit PIN (hardly a large amount of entropy) in real time.
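If that assumption holds and the secret can be extracted, the speed gap is dramatic. A rough sketch of the two rates (the billion-guesses-per-second figure is the commenter's hypothetical, not a measured number):

```python
# Comparison of on-device vs. hypothetical offline guessing rates for a
# 10-digit PIN. The 80 ms/guess figure is from the article; the offline
# rate of a billion guesses per second is the commenter's hypothetical.
PIN_SPACE = 10 ** 10

on_device_rate = 1 / 0.080        # 12.5 guesses per second
offline_rate = 1_000_000_000      # hypothetical dedicated hardware

on_device_years = PIN_SPACE / on_device_rate / 31_557_600
offline_seconds = PIN_SPACE / offline_rate

print(f"on device: ~{on_device_years:.1f} years; "
      f"offline: {offline_seconds:.0f} seconds")
```

The catch, of course, is that the offline figure only applies if the key derivation does not entangle a hardware secret that never leaves the device.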

alex August 21, 2012 2:45 PM

I agree with GTheGrey. Dump the flash and brute-force it somewhere else. Or does a smartphone have a TPM?

Jens Alfke August 21, 2012 2:55 PM

Fraunhofer did some analysis and has a few white papers:

Pretty much everything is encrypted with either a “device key” (which is good against remote attacks, but not much protection against an attacker with physical access who can root the device), or a more secure user key that’s derived from the device key plus the user’s passcode.

Most user files default to encryption via the device key, although there’s a file attribute the app can set that will make a file use the user key. Keychain files, which store passwords and keys, default to encryption using the user key, making them pretty secure.
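The device-key/user-key split can be sketched in a few lines. This is a toy illustration with made-up constants, not Apple's actual algorithm:

```python
import hashlib

# Toy sketch of the two-tier scheme described above -- NOT Apple's actual
# algorithm. The device key and iteration count here are made up; the point
# is that entangling the passcode with a hardware-held secret forces guesses
# to be checked on (or with) the device.
DEVICE_KEY = bytes.fromhex("00112233445566778899aabbccddeeff")  # stand-in for a hardware secret

def derive_user_key(passcode: str, iterations: int = 50_000) -> bytes:
    # The passcode alone is not enough: without DEVICE_KEY, an offline
    # attacker cannot even test a guess.
    return hashlib.pbkdf2_hmac("sha256", passcode.encode(), DEVICE_KEY, iterations)

user_key = derive_user_key("1234")
assert len(user_key) == 32  # 256-bit "user key"
```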

TheNextStep August 21, 2012 4:40 PM

will be what India and other countries did to the Blackberry: you can’t use it without providing the overlords with the keys. The reality may be that there just isn’t a back door.

NotThatOptimistic August 21, 2012 5:30 PM

Unfortunately, after working with LE agents for years, I can squarely attest that while the security mechanisms on iPhones will cause some strife for a normal person, they are no match for their specialized software.

We took the LE version of Cellebrite for a test using a personal iPhone, PIN-locked and with all security features configured (iOS 5.1). He connected it; it immediately booted with a Cellebrite logo on the iPhone’s screen, dumped the ROM, decrypted it, and he was able to hand me all the passwords, texts, and other data on the phone in about 30 minutes for a 16 GB model. While we didn’t test it, he also mentioned that many of the password-protected data-safe applications out there take moments to crack, so storage inside a protected area isn’t terribly tough either.

The fact that the boot screen was replaced showed me that either Cellebrite completely owned Apple’s security or, more likely, that they work with Apple to provide a complete forensic tool, as they are a reputable company and target law enforcement heavily.

Carl 'SAI' Mitchell August 21, 2012 6:31 PM

There are also different branches of any major government. Law enforcement generally won’t have the resources of a military SIGINT division. I doubt the FBI/DOJ can break Truecrypt’s AES implementation; I’m nowhere near as sure about the NSA or CIA. The same applies to the iPhone encryption: the FBI isn’t likely to be able to dedicate the resources to break it without Apple’s cooperation in placing a backdoor, while the NSA is far more likely to already have one.

NobodySpecial August 21, 2012 8:06 PM

“I doubt the FBI/DOJ can break Truecrypt’s AES implementation. I’m nowhere near as sure about the NSA or CIA”

Traditionally, the guarantee that the NSA/Secret Service/CIA/USAF/AAA haven’t broken it is that they use it.

Unless each of them has independently broken it AND managed to keep the fact secret, or they are working together in a spirit of collaboration – neither of which seems likely.

Andre Gironda August 21, 2012 8:36 PM

It’s possible to gain full access to the non-data-protected store (note that the data-protected store by default includes only mail and its attachments). This is easily accomplished with any DFU-mode kernel (preferably one with the signed SHSH blobs), including on all of the new devices.

The primary issue with gaining access to the non-data-protected store on the 4S and iPad 2/3 is that forensics companies don’t carry or offer support for these devices. This can be worked around by using cycript, MobileSubstrate, or a similar mechanism once DFU mode boots into a root-capable userland. The screen lock can be removed without knowledge of the PIN or extended passphrase. This will open access to non-data-protected store items in a similar way that emf_decryptor will, but with the additional advantage that the Objective-C runtime can be hooked. In this situation, many runtime string-comparison operations can be read or modified in the live environment. These are often fully revealing, exposing items such as the data-protected storage keys and sub-storage.

Worse, the DFU mode can be activated without holding down any buttons (e.g. Home + Sleep/Wake) via a known USB frame structure. Thus, it could be combined with juice-jacking. The attack could take anywhere from 2-10 minutes and data could be exfiltrated similarly.

Andre Gironda August 21, 2012 8:43 PM

This statement leaves me skeptical: “This software must be run on the iPhone itself, limiting the guessing speed to 80 milliseconds per PIN.”
I mean, it is an algorithm. It can run anywhere, unless it requires some secret hardcoded into the chip itself; but even in that case the secret can be recovered from the machine, and the algorithm can be executed on hardware a billion times faster, cracking a 10-digit PIN (hardly a large amount of entropy) in real time.

As GTheGrey and NotThatOptimistic mention, it is in fact possible to create a dmg image of the entire filesystem and then brute-force the PIN/passphrase offline.

This isn’t a backdoor. It’s a design and implementation problem.

@nonym0us August 21, 2012 11:13 PM

Think about why British MPs and the Queen use iPhones instead of Blackberries. Someone told them about this earlier… 🙂

But reality is a bit different. Those who think iPhone security is uncompromisable are living in a fool’s world.

Remember this quote,

“Out of the crooked timber of humanity no straight thing was ever made”.

However, Boeing is working on a secret project of untraceable phones (source: undisclosable)…

Autolykos August 22, 2012 12:52 AM

Oh, how shockingly bad news for LE. They can’t just waltz in and grab everything not nailed down, they have to think about it beforehand (and maybe try to insert a Trojan, etc…). Now they are seriously screwed and need strict laws against cryptography. Oh, and harsh punishments, how could I forget about those? Because you don’t have anything to hide, citizen – or do you?

@Craig McQueen: Only if Slashdot is down now…

Jim Hillhouse August 22, 2012 1:08 AM

NotThatOptimistic, was the PIN 4 digits or longer – or would that have even mattered – in the Cellebrite demo?

Gweihir August 22, 2012 2:10 AM

My take is that this is either a push from the intelligence community to make people feel safe and put their secrets into their phones so that the TLAs can actually get something from them, or it is just plain wrong. (Leaving aside the obvious incompetence of conventional forensics staff: 25 years on the iPhone sounds like no more than a day on modern FPGAs, probably less. And almost nobody will have a 10-digit PIN. But even for those who do, that is only about 33 bits, and you cannot do any massive hash iteration on a device as weak as an iPhone.)

To me, the second citation is the key, and it can support both versions. It is obviously BS, but it carefully caters to some common misconceptions. The question is: stupidity or intent? By Hanlon’s razor it would be incompetence, but I am not so sure. If people believe iPhones are insecure, they will not put any juicy secrets in there. If the truth really is that they are insecure, but it takes more than amateur-level effort to get into them, having people believe they are secure could yield a major intelligence source. And the police would not lose a lot; if I remember correctly, evidence from phones is massively overrated for purposes of law enforcement.

Gweihir August 22, 2012 2:24 AM

@Jim Hillhouse: I am wondering the same thing. 4 digits can be brute-forced fast on the device itself. 10 digits is a bit much even for a modern PC without special hardware (assuming some reasonable password-hash iteration, of course; say, 1 second per guess on the iPhone).

Clive Robinson August 22, 2012 3:32 AM

@ Autolykos,

Of Law Enforcement (LE),

Now they are seriously screwed and need strict laws against cryptography. Oh, and harsh punishments, how could I forget about those?

Almost the first thing I thought of after reading the article (and replying to @Nick P’s post) is “this is a warm-up act for a new Crypto Law dog-and-pony show from the FBI et al.”

Like Bruce I’ve come to the conclusion over many years that the hard part of using crypto is rarely ever talked about, and that’s as they say in the auto industry “The nut behind the wheel” driving it along.

Yes, we have theoretically strong algorithms, but take AES for instance: even before the final-round candidates had been selected, the practical implementations you could download (and nearly everybody did) were flawed and broken.

Put simply, one of the “requirements” for the “reference implementations” was “speed,” so they were heavily optimised for it, and this opened up a can of worms with “timing side channels” through which the key data hemorrhaged. IIRC, Peter Gutmann developed the first practical attack via the iAx86 cache within weeks of the final candidate selection. You can read a very in-depth paper about the AES cache-timing issues from back in 2005, which Bruce mentioned in Cryptogram. You can also have an interesting read over at Peter Gutmann’s (2nd) home page.

Thus the issue is: how do you get the work done on high in the ivory tower of theory down to the rocks, dirt, and soil of the everyday objects used by the “common clay” of human existence? You would think it would be easy if you look at a car, telephone, or even a pencil; sadly not. In security, unlike almost every other endeavor, everything really does matter, even if you don’t know why (and others won’t tell you). It is said there is many a slip twixt cup and lip, and in most places a little spit and polish solves the spillage problem; not so in security, where, like poison, even the tiniest spilt drop can kill inadvertently.

So can we make a secure smartphone or computer? In theory yes, but in practice we don’t yet know enough. And this is partly due to not having the ability to measure things in a meaningful way, such that we can make comparative tests that have meaning.

Do I think Apple has done it? Let’s put it this way: the odds are very much against them, and so are quite a few TLAs (though some, like the DoD, are very interested in improving things). Is it “kid sister proof”? Well, that depends on the kid sister and whether she knows where to look for info/tools. Is it LEO-proof? Yes and no; it depends on the value, both financial and political, of the crime. If it’s below some bar, they won’t expend the resources to get at the data; hence the idea from the UK’s RIPA, where they just jail you till you can prove a negative, as it moves the cost from the LEO’s budget to that of the courts and prison service. As for the big-budget government TLAs and many corporates: not a chance it’s secure against these guys.

salach August 22, 2012 5:14 AM

Can such platforms as the iPhone or the rest of the smartphones be considered secure when users download and install apps with hardly any limitations? Methinks the answer is a big NO.
Encryption of the filesystem can help, and only to some extent, against post-event investigations, not if you are a known target of a TLA or LEO.

qwertyuiop August 22, 2012 7:08 AM

This article plays to one of my biggest irritations, which is quoting figures for how long it would take to crack a password of n characters (in this case a PIN of n digits): “…a nine-digit PIN would require 2.5 years, and a 10-digit PIN would take 25 years…”.

No it would NOT take 2.5 years or 25 years – it would take a MAXIMUM of 2.5 years or 25 years. In practice you wouldn’t have to check every password or PIN, you’d stop when you got to the correct one!

…end of pedantry
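The pedantry checks out. At the article's 80 ms per guess, the expected time for a uniformly random 10-digit PIN is about half the quoted maximum:

```python
# Expected vs. maximum cracking time for a uniformly random 10-digit PIN,
# at the article's 80 ms per guess.
SECONDS_PER_GUESS = 0.080
space = 10 ** 10  # all 10-digit PINs

max_years = space * SECONDS_PER_GUESS / 31_557_600
avg_years = (space + 1) / 2 * SECONDS_PER_GUESS / 31_557_600

print(f"maximum: ~{max_years:.1f} years, expected: ~{avg_years:.1f} years")
```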

Johan August 22, 2012 7:45 AM

I really don’t understand people using short PINs. Well, I guess I do for the masses, but I bet everyone reading this has switched their phone/tablet to using the full ASCII keyboard and maxed out the number of characters.

I’m an Android guy. I use Moxie Marlinspike’s ‘WhisperCore’ encryption on my phone, which allows for a different pass phrase for full phone (including SD Card) encryption during boot, and a 16 character (the max for Android, ATM) lock screen pass that isn’t in a dictionary.

On my tablet, it’s 16 characters for the Jellybean encryption and lock screen (obviously, as they have to be the same, but I’ll check out that xda thread about making it even stronger).

It literally only takes about 4 seconds to enter 16 characters, so I don’t get why people would choose a weak PIN instead.
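The entropy difference Johan is describing is large. A rough sketch, assuming a passphrase drawn from the ~95 printable ASCII characters:

```python
import math

# Rough entropy comparison between numeric PINs and a 16-character
# passphrase drawn from the ~95 printable ASCII characters.
def entropy_bits(alphabet_size: int, length: int) -> float:
    return length * math.log2(alphabet_size)

print(f"4-digit PIN:        {entropy_bits(10, 4):.1f} bits")
print(f"10-digit PIN:       {entropy_bits(10, 10):.1f} bits")
print(f"16-char ASCII pass: {entropy_bits(95, 16):.1f} bits")
```

This only holds, of course, if the passphrase is actually random rather than a dictionary phrase.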


Nesetalis August 22, 2012 7:53 AM

This article has got to be bullshit.
Unless they are going the route of, say, IronKey, where opening the device to extract the flash ROM destroys it (and even that is doubtful; I’m sure there are ways to extract it without destruction), then 80 ms per input is pointless garbage.
Only the kid sister would attempt to crack the encryption via the phone itself.

I don’t even bother encrypting my phone, nothing on it is sensitive, if I’m going somewhere private I don’t take it with me.

Captain Obvious August 22, 2012 8:20 AM

“you’d stop when you got to the correct one!”

In govt you might try them all just to be sure.

tOM Trottier August 22, 2012 1:45 PM

Of course, this covers only software hacks. What if the attacker starts taking the processor apart, layer by layer, to find the “burned-in” key? Or what if Apple is lying about not recording the key and has a table mapping serial numbers to keys?

Rambling Wreck August 22, 2012 3:30 PM

I think Clive hit the nail on the head. You need to have three levels: “kid sister proof”, “LEO proof”, and “TLA proof”. The moral of the story is that “LEO proof” may be less than “kid sister proof” depending upon who your kid sister is.

justin August 22, 2012 6:16 PM

This is a case where if it sounds Too Good To Be True, then it probably is. I mean, how likely do you think it is that in our modern, post-9/11/2001 world, a mainstream proprietary consumer device sold by the biggest corporation in the world would have “uncrackable” encryption without a back door for law enforcement and other “Three Letter Agencies”?

And how is “LEO proof” any different from “TLA proof”? That wall fell down on September 11, 2001. Surely the local police will at least get an “anonymous tip” if a “TLA” happens to find anything interesting or undesirable on your phone.

It’s pointless to speculate on all this. Either somebody is snooping in your stuff or not (and I’m pretty sure Apple has the ability to snoop on its own iPhones), and it really doesn’t matter whether their purpose is to fight crime, commit their own crimes, profile you for targeted advertising, or anything else; some organizations and individuals have a lot of resources to bring to bear when they want to snoop.

Clive Robinson August 23, 2012 2:48 AM

@ Justin,

And how is “LEO proof” any different from “TLA proof”?

It’s different at the non iPhone end of the problem.

Most LEOs are meaningfully resource-limited in terms of manpower, technical resources, and finances. Therefore the LEO has to decide how those resources are used, and this generally means that crimes without political or press interest that are also below a certain financial value are not cost-effective to record, let alone investigate. As the value of the crime rises, it becomes cost-effective to record them and possibly group sufficient similar reports together to get some “detective time.”

This “resource bar” occurs in part because of the false assumption that “locality” is universal. That is, to steal a tangible physical object the criminal has to be physically present to commit the crime, and thus cannot be in two places at the same time nor get from crime to crime instantly. The assumption fails to consider that with intangible, non-physical objects such as information, the criminal can be in many different places almost simultaneously, something that some criminals now exploit [1].

TLAs tend not to make the “locality assumption” because essentially they deal in intangible information as their stock in trade, not tangible physical objects. They also know that collecting information from a single target after the event is very expensive. But importantly, they also know that “economies of scale” apply: if you spend on the technical resources to investigate a single event, the second event investigated comes almost for free, provided you suitably scale the technical resources and remove the “manpower resource.” Some TLAs go a stage further and exploit the “near-zero cost of duplication” of information and automate the process to be ubiquitous, the only thing stopping it becoming universal being the judicial process.

So TLAs push, via the “National Security” argument, for removal of judicial constraint (warrants, etc.) to get “Universal Surveillance.” LEOs, however, are not much interested in “Universal Surveillance,” just zero-cost access to information as and when they see fit; so instead of pushing for removal of judicial constraint on investigation, they push for zero-cost (to the LEO) “compulsion.” For instance, under the UK’s RIPA, if you fail to reveal your decryption key on demand, you can be locked up for several years unless you can do the almost impossible task of proving a negative (i.e., that you don’t have the decryption key) [2]. Thus the “compulsion” is: do significant hard time, or cough up the keys and do less time…

The problem is that once the required changes in legislation happen they become available to both the LEO’s and the TLA’s and you get squeezed from both sides into a rapidly diminishing area of privacy.

Oh, and as for TLAs tipping off LEOs, it rarely happens, partly for political “turf war” reasons, but mainly because much TLA information is obtained extrajudicially and is thus considered “tainted or poisoned” and therefore useless for the LEO judicial process. Also, as has been seen in “terrorist” cases where TLA intel has been used as the foundation of the investigation, the defence goes after it like a rabid dog to get it put beyond judicial use, the simplest method being the right “to face one’s accuser” in court for cross-questioning, etc. The TLAs back off because it reveals “methods and sources,” which become endangered or “burned.” The UK is still “ticked off” by the second “underpants bomber” plot, because US agencies “burned” a very valuable asset for little or no return (a politically timed press release).

[1] The LEO “resource bar” based on the assumption of “locality” lets a lot of Internet and other crime get by under the radar: the individual crimes have too low a value, and they are so widely spread out geographically that they don’t get grouped together, so the criminal profits greatly from many individual crimes while LEOs cannot apply the resources to properly investigate any of them… Hence the idea of making “Internet Crime” a “Serious or Organised Crime” issue, which means it gets taken away from frontline LEOs and given to one of those “halfway house” TLAs like the US FBI or the UK’s doomed SOCA, who are usually “hog-tied” by other resource, legal, or political issues.

[2] The easiest and most believable way of showing you don’t have the decryption key is to show the LEOs destroyed it whilst it was in their control and thus beyond yours. If this happens via a widely recognisable mechanism (batteries going flat), an argument can be made that the LEO was deliberately incompetent, for various reasons.

RichieB August 23, 2012 5:30 AM

The contents of most iOS 4/5 devices can be dumped in DFU mode, exposing a lot but not all content. This is not (yet) possible for the iPad 2/3 and iPhone 4S. The most interesting content (E-mail, passwords, etc) is protected by a Device Key and PIN. There is no known method to read the Device Key from the device. (Also, it is wiped when the iOS device is reset to factory defaults.)

This means that when a brute force attack is carried out outside of the iOS device, the full AES key needs to be recovered. When brute forcing on the device using the iOS APIs only the PIN needs to be cracked (because the APIs will supply the Device Key). If the PIN is sufficiently long, this will indeed take a long time and LE won’t be able to get to the encrypted data.
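The asymmetry RichieB describes is easy to quantify. A sketch, with a deliberately generous offline guessing rate as the assumption:

```python
import math

# Offline, an attacker must search the full 256-bit AES keyspace; on the
# device (which supplies the Device Key), only the PIN space. The 10^12
# guesses/second offline rate is a deliberately generous assumption.
pin_space_bits = math.log2(10 ** 6)        # a 6-digit PIN: ~19.9 bits
aes_key_bits = 256

offline_years = 2 ** aes_key_bits / 1e12 / 31_557_600
print(f"PIN space: ~{pin_space_bits:.1f} bits; "
      f"offline 256-bit search: ~{offline_years:.2e} years")
```

So as long as the Device Key really cannot be read out, imaging the flash buys an attacker essentially nothing.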

A brief overview is at

FYI: I actually use a very long iOS PIN, and no, I have nothing to hide.

getothechoppa August 23, 2012 8:43 AM

LE ran the same stories about Blackberry devices; then, surprise, they had access all along. Remember India threatening to ban Blackberries unless they were given the same backdoor access all western LE enjoys?

RichieB August 23, 2012 10:09 AM

@getothechoppa: do you have any proof to back up these statements? AFAIK Blackberry devices have an end-to-end encrypted tunnel to the Blackberry Enterprise Server which is typically placed on premise at the customer. RIM couldn’t give access to the contents of that data stream even if they wanted to.

Nick P August 23, 2012 1:19 PM

@ Bruce Schneier

You should be skeptical. Elcomsoft claims to extract the keys from these devices with their software, which is only for govt & TLA use. They make some pretty cool claims on the page below.

Elcomsoft iOS Forensics

Clive Robinson August 23, 2012 4:35 PM

@ RichieB,

AFAIK Blackberry devices have an end-to-end encrypted tunnel to the Blackberry Enterprise Server which is typically placed on premise at the customer.

That is only true of Blackberries where the business has an “Enterprise Server”; where Blackberries are used by small businesses or individuals, the “server” is run by RIM.

A lot of details leaked out a year ago due to the London 2011 Summer Riots. It appears that the UK police got to know a lot of things very quickly, probably a lot faster than the usual “Warrant Process” would allow…

It would also appear that India was not interested in business users but in individuals who might be breaking “purity rules,” etc. Thus it would appear to be for privacy invasion, not industrial espionage or looking for what we in the west would consider criminal activity.

RichieB August 24, 2012 4:26 AM

@Nick P: from the Elcomsoft page:

“Elcomsoft iOS Forensic Toolkit can brute-force iOS 4 and iOS 5 passcodes in 20-40 minutes for a 4-digit passcode. Complex passcodes can be recovered, but require more time.”

See the main article by Bruce: use a long enough passcode and it will take years to recover the data. Also:

“iPhone 4S, iPad 2 and the new iPad support is limited to jailbroken devices only.”

Clive Robinson August 24, 2012 5:05 AM

@ RichieB,

“iPhone 4S, iPad 2 and the new iPad support is limited to jailbroken devices only.”

Hmm what an interesting statement that is.

Look at it this way: LEOs cannot “jailbreak” a device, as that is “tampering with the evidence” and the phone becomes “fruit of the poisoned vine” and thus inadmissible in court (in theory).

But extra-judicial (EJ) TLAs, who have no interest in going through the legal process, will happily “jailbreak” a phone prior to gathering intel from it.

Thus the LEO’s need legislation to “compel” the owner to reveäaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa, and the EJ-TLA’s need legislation or equivalent to ensure a “back door” for easy jailbreaking etc.

If I remember correctly, the Communications Assistance for Law Enforcement Act (CALEA) has provision for such back doors, depending on what is deemed exchange and terminating equipment.

In the UK, a phone that can do call transfer (which all mobiles/cells can do) is considered a combination of both exchange and terminating equipment; thus, under that view, all mobiles/cells would be required under CALEA to have an LE back door. And if I remember correctly, that argument was made against Skype and other VoIP phones.

Clive Robinson August 24, 2012 5:26 AM

Hurumph Herumph (or however you spell it)

The phone has done it again 🙁

In my above the “aaa…” string should have read,

“reveal their pass code”.

I shall now put the phone down and walk away muttering almost inaudible curses and revive my humor with a nice cup of “Brownian Motion Generator” [1]

[1] If you don’t know what it is read,

Nick P August 24, 2012 10:32 AM

@ RichieB

Nice catch.

@ Clive Robinson

“Look at it this way: LEOs cannot “jailbreak” a device, as that is “tampering with the evidence” and the phone becomes “fruit of the poisoned vine” and thus inadmissible in court (in theory).”

They have a side route. The trick they use is to make a 1-to-1 copy first, maybe several. Then they have a reference for the untampered-with original. Which leads to an idea.

Perhaps the forensics guys could copy the encrypted info first. Then they can jailbreak the phone and crack it. From there, they simply need to re-encrypt the extracted data with THAT key and compare it to the original encrypted data to show correspondence.

(Of course, this all depends on how the encrypted data is accessed before jailbreaking, and to what extent. The method might be impossible. However, it might work, and I could see this method being applicable to other types of systems, like TrueCrypt PCs.)
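Nick P's verification idea can be sketched with a toy stream cipher standing in for the phone's real (AES-based) encryption; the key, data, and cipher here are all illustrative:

```python
import hashlib, hmac

# Toy sketch of the re-encrypt-and-compare idea above. A SHA-256-based
# keystream stands in for the phone's real cipher (an assumption -- the
# actual scheme is AES-based); the key and data are made up.
def toy_encrypt(key: bytes, data: bytes) -> bytes:
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

# The pristine encrypted image, copied before any jailbreaking:
original_image = toy_encrypt(b"recovered-key", b"contacts, mail, photos...")

# After cracking: re-encrypt the extracted plaintext with the recovered key
# and compare against the pristine copy to show nothing was altered.
recheck = toy_encrypt(b"recovered-key", b"contacts, mail, photos...")
assert hmac.compare_digest(original_image, recheck)
```

The comparison only demonstrates correspondence if the real cipher is deterministic for a given key and sector, which is itself an assumption about the phone's scheme.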

Wael August 24, 2012 11:51 AM

@ Nick P, @ Clive Robinson

The trick they use is to make a 1-to-1 copy first, maybe several. Then they have a reference for the untampered-with original. Which leads to an idea.

The “right” way of doing that is to clone the original phone. You can then do all your testing and poking around on the clone. That clone may be an identical device (serial number, IMEI, etc.) or an emulator running on more powerful hardware. The clone can also be a HW development board that exposes the needed connections and JTAG functionality. At any rate, the original phone is never tampered with, unless that is needed for a “beyond a shadow of a doubt” proof.

Clive Robinson August 24, 2012 12:46 PM

@ Nick P, Wael,

The “right” way of doing that is to clone the original phone

The question is can you clone the important bits such as the partial but “secret” encryption key stored in hardware?

If not then your flash HD clone will be no closer to being unlocked than it was before.

Now I don’t “know” the internal details of the iPhone as I’ve not investigated it (as I don’t want Apple’s legal bods on my back), so I can only go on what others have said. And many have said that the key is “unavailable” as it’s based on a secret “stored in hardware” that is then one-way hashed with the PIN to produce the encryption key.

If this is true (and I have my doubts) then the guessing will have to be done on the original phone.
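If the description is accurate, the construction would look something like the sketch below. PBKDF2 is my stand-in here; Apple's actual KDF, parameters, and names are unknown and everything in this snippet is an assumption for illustration:

```python
import hashlib

def derive_disk_key(hw_secret: bytes, pin: str) -> bytes:
    # One plausible construction: entangle the device-bound secret with
    # the user's PIN, so guessing must happen where the secret lives.
    return hashlib.pbkdf2_hmac("sha256", pin.encode(), hw_secret,
                               100_000, dklen=32)

hw_secret = bytes(range(32))        # stand-in for the fused-in secret
key = derive_disk_key(hw_secret, "1234")
# Without hw_secret an attacker cannot reproduce `key` off-device,
# even though the PIN space itself is tiny.
```

This is exactly why, if the secret really cannot be extracted, the guessing has to run on the original phone.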

Wael August 24, 2012 1:01 PM

@ Clive Robinson, @ Nick P

The question is can you clone the important bits such as the partial but “secret” encryption key stored in hardware?

True. The question then becomes: What is the nature of that secret? Is it a global secret (the same for all models), or a secret based on a unique identifier of the device that can be calculated from, say, the serial number?

I will heed your words and not dig deeper so I don’t get Apple to …

Wael August 24, 2012 1:06 PM

@ Clive Robinson

If not then your flash HD clone will be no closer to being unlocked than it was before.

This is not entirely true though. When you take an image away from its “intended working environment” you can bypass some security mechanisms, including dictionary attack thrashing and other protection mechanisms…

Clive Robinson August 24, 2012 11:50 PM

@ Wael,

This is not entirely true though. When you take an image away from its “intended working environment” you can bypass some security mechanisms, including dictionary attack thrashing and other protection mechanisms…

In the case of full-disk encryption with AES, with the AES key being derived from a secret kept in a separate hardware chip that acts as an Inline Media Encryptor (IME), where the secret is hashed in some way with a passcode, you are not getting any advantages.

You are either going to have to brute-force the AES key, which (in a proper setup) is currently deemed impractical, or work out how to get the secret out of the IME chip so you can brute-force the passcode space on other hardware.

So if you cannot get at the secret in the IME chip then your only option is brute-forcing the AES-encrypted drive.

The question then is how you get at the secret: unless there is an inbuilt backdoor to read it out, or you can load software into RAM to read it out, you are going to have to tamper with the evidence in some way.

Nick P August 25, 2012 2:30 AM

@ Wael and Clive

“This is not entirely true though. When you take an image away from its “intended working environment” you can bypass some security mechanisms, including dictionary attack thrashing and other protection mechanisms…”

Now you guys are thinking practically [enough]. 😉

“AES key being derived from a secret kept in a seperate hardware chip that acts as the Inline Media Encryptor (IME) where the secret is hashed some way with a passcode you are not getting any advantages.”

You’re talking like it’s the NSA’s IME. It’s Apple’s. Have you seen their security implementation track record? I figure the guys working on this stuff are a few grades above the Mac OS X & iOS coders, yet I’m not convinced ahead of time it’s extremely well designed. Let them prove that.

Meanwhile, we know Apple is an ARM licensee, presumably putting differentiators into their own SoCs. One might be accelerated, inline encryption with an onboard key. Two things: we don’t know how that’s accomplished (it might be done insecurely) & nobody has mentioned a tamper-resistant Infineon-style crypto processor.

In all likelihood, the chip might be attacked and the cryptosystem reverse-engineered, as Ross Anderson’s people kept doing to smart cards. RobertT posted about a company in the past that does that to chips in general. There might be even easier ways to coax the key out via hardware, or to trick the chip into decrypting. I’d need expert exploration before giving it secure-IME status.

Also, there may be some merit in TLAs or LEOs trying to force Apple to help them make the extraction tool. Court order, national security letter, use of existing federal laws/regs, etc. Even if they only get the design, they might be able to identify a weakness & build an extraction tool. I think the lack of tools to crack the crypto of an iDevice is more indicative of the lack of expertise applied to it.

It’s not what most forensic guys & their tools are used to dealing with. It’s higher hanging fruit. It won’t be point and click. Hard work must be done first. Then, everyone else will have an easier job at it. Unless, the security is really THAT good. 😉

Wael August 25, 2012 3:30 AM

@ Nick P @ Clive Robinson

Strange. I agree with both of you. I must be sick or something 😉

I was trying to hint at implementation details allowing “adversaries” to extract information without tampering with the device. Not tampering means not changing the state of the device. If there is a database of device keys and serial numbers (not so big an “if”), then there is no need for tampering.

One way to properly implement a HW-protected key is to generate that key when the user (owner) provisions the device. The TPM, for example, generates what is known as an SRK (storage root key, the root for all key hierarchies underneath it) when the user “takes ownership” of the chip. This key is not provisioned or escrowed at the factory, so theoretically no one has a database mapping SRKs to platform IDs. Adversaries will have to tamper with the TPM. If they choose to copy the image and try to decipher it on a different platform, then what Clive Robinson talked about holds true.

Keep in mind that the TPM and the platform it resides on have several measures of defense. Also, resetting the TPM to factory default deletes the SRK, and a reset can take place when tampering is detected. I saw a YouTube video the other day with a guy claiming to have broken the TPM (version 1.1) by zeroing the PCRs (platform configuration registers). I smiled a bit 😉

Short story: it depends on the implementation. And just because someone says it’s too hard for an organization to break means little. If I were an adversary and was able to extract (break, breach, etc.) the encryption key (or algorithm, or system) of my enemy, I would make sure to let the enemy know indirectly that their system is “too secure”. I would send a message, knowing it would be intercepted, saying something like “dang! These guys made it too hard; it will take us years to break it.” Of course, the reverse is also true: if I can’t break it, I will claim I did 😉

Clive Robinson August 25, 2012 5:12 AM

@ Nick P,

I figure the guys working on this stuff are a few grades above the Mac OS X & iOS coders, yet I’m not convinced ahead of time it’s extremely well designed. Let them prove that

We are both in agreement here, and as I’ve already said, personally I believe it is backdoored in some way due to CALEA requirements etc.

The problem is I don’t know enough about the design; Apple is keeping schtum on the important bits. Thus all I have to work with is information about the design as described by others, and my own experience. As others have described it, the design is the equivalent of a simple IME on the external data bus of the CPU, such that all data going to off-chip storage is encrypted.

Which, to be quite honest, is at a low level a very awkward design, due to having separate address spaces for I/O to devices etc. for control purposes, so I reserve judgment on the description. At this very low level it would also be a likely contender for security bypass etc.

But back to the point of LEOs and data at rest and what is legislated currently. CALEA only applies to active communications from the device, not what is stored on it. I’m also aware that Apple may have “received advice” on the design, as it’s the phone of choice for frontline troops on active service [1]. And as we know, the NSA took a “crackberry” and did some work on it to make the “Obamaberry”, which is reputed to be secure enough for Presidential use…

The question though was not about what access a TLA or equivalent might have at any time, or an LEO when the phone is in its active state, but, “What access does an LEO have to the data at rest?”

Which is a very limited and limiting case for LEOs currently, because they “cannot tamper with evidence”, as doing so allows it to be excluded, and the chances are the case goes with it.

So they either need the phone to have already been tampered with (jailbroken) in a way that makes it vulnerable, or some equivalent (backdoor) route to repeatedly try passcode guesses [2]. Also, with regard to the passcode, we don’t know if it’s been “length limited” in some way [3].

One of my concerns, as I’ve already indicated, is the real quality of the entropy, or guessability, of the “embedded secret” [4], and whether Apple keeps a record of it by serial number etc. [5].

For instance, let’s say a product manufacturer needs to produce a lot of factory-embedded secrets; true entropy is going to be a real issue for many reasons.

However, faux or deterministic entropy is fairly easy: you just use AES in CTR mode, or, if you are really cheap, use the product serial number and just AES-encrypt it with a “secret key”. Either way the output appears to have good entropy, but the reality is it’s easily calculated upon request without having to reveal the “secret key”.

Too little is known about how the Apple embedded secret is generated to say what its security actually is.
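As a toy illustration of that point (using HMAC-SHA256 as the keyed function since Python's standard library has no AES; the construction and the key name are hypothetical, not anything Apple is known to do), a factory could mint per-device "secrets" that look random to everyone outside yet are recomputable on demand:

```python
import hashlib
import hmac

FACTORY_MASTER = b"factory master key"   # hypothetical; never leaves the factory

def device_secret(serial: str) -> bytes:
    # 32 bytes that pass every statistical randomness test, yet are a
    # pure function of the serial number and the factory's master key.
    return hmac.new(FACTORY_MASTER, serial.encode(), hashlib.sha256).digest()

# Anyone holding FACTORY_MASTER can regenerate any device's "entropy"
# from its serial number alone, with nothing per-device to store.
assert device_secret("F17EXAMPLE1") == device_secret("F17EXAMPLE1")
```

The same structure holds for the AES-based schemes Clive mentions: the output is deterministic, only the master key keeps it looking random.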

But getting back to the question of LEOs getting at data at rest without tampering: if no access is seen as a problem by the legislature, the solution will be further bad legislation. There are three possible outcomes to this,

1, Relaxation of evidentiary rules and laws.
2, Mandating a LEAF or equivalent.
3, Legislation to force “compulsion”.

The cheapest option for the Gov is 2, the favourite for the US private-prison profiteers is 3, and 1 will inevitably spill over into other areas of evidence.

As 1 is a current favourite attack by various LEOs and the judiciary appear happy to play along, and 2 is problematical (Clipper Chip, CALEA debacles), and the UK got RIPA through, my guess is they are going to go after 3…

[1] As I’ve said before Apple are opening a new manufacturing facility in Texas and I’ve heard rumours it’s going to eventually be “silicon up”.

[2] The fact that passcode guessing is possible tells us Apple has not done the IME design correctly already so my confidence in the design is not high.

[3] Passcodes, passwords and keys have been known to get either “truncated” or partially zeroed by “implementation mistakes” due to APIs etc.

[4] History tells us that poor entropy is a major failing of many security products from Netscape all the way through to current routers generating PK certs…

[5] We know from the RSA debacle that keeping secret seeds is not unknown and may be standard practice for some US companies.

RobertT August 25, 2012 5:48 PM

Hmmm, from reading between the lines Apple has some secret key stored on the chip which forms part of the crypto process.

To store device-unique data on each chip you typically need one of the following:
1) Fuse (typically either metal or poly)
2) EEPROM
3) Flash

Unfortunately each of these data storage means has telltale associated hardware close by. Fuse needs VERY wide supply metal to guarantee that the fuses blow cleanly (otherwise you risk regrowth).

EEPROM typically requires double poly (very unusual = easy to spot) and a high-voltage generator with a thick gate oxide for the switches. Thin gate oxides cannot be used in EEPROM cells because they result in charge leakage by direct band tunneling.

Flash: a typical Flash cell works because a combination of high gate voltage and high drain-region voltage stress causes an offset to develop, due to a process called “hot-electron injection”.

These are the three typical on chip data storage mechanisms.

If Apple wanted to get really tricky, they might choose to use some other “on chip” offset mechanism and simply NEVER reveal what the storage mechanism is (I think we call that security by obscurity). BTW it only works IF you remain obscure, and it is impossible to be a dominant consumer product and hope for security by obscurity. The other problem is that nobody really understands the long-term changes that might occur for obscure offset mechanisms. Typically today there is an expectation of a 10-year life for the product, so forgetting the encryption key after 1 year would not make for a popular product. Meaning they need to stay with well studied / understood offsets.

OK, so from the above I can say with reasonable certainty that Apple uses some variant of a Flash cell to store the encryption key (IF it is stored on the application-processor SoC).

We know that Samsung is Apple’s Fab so if we obtain Samsung’s Reference Flash cell layout (for the process that Apple uses), then we know exactly what pattern we are looking for. (Generally flash cells are Hard Macros meaning they must be used AS IS, without any layout modifications).

Flash cells can be identified by buying any Apple iPhone and decapping the application processor (takes about 30 min). Now you want to polish back the top layers and get down to Metal 2 or maybe Metal 3. At this point the specialty macro cells will be easy to identify with any normal optical inspection microscope.

Once we identify all the flash cells used on the SoC we only need to work our way through each of the sections and establish what use each likely has. (IF you have sufficient reference samples, you can identify a Flash cell’s function by destroying 1 bit of the cell and seeing what changes; this can be done with a laser or FIB.)

BTW the changes might be very subtle (because the flash cell might do something like offset trim correction for an on chip ADC)

Typically Flash blocks have a high overhead (a lot of associated circuits, charge pumps etc.), so you usually find that many of the flash functions are combined into a single Flash block. Typically this block will have a security lock bit which, if set, prevents re-flashing. This is the bit we are looking for, because this is the bit we need to undo. Once undone we typically have access to the chip’s test scan path (which includes mirror cells of the flash bits).

Given the above, IMHO, Apple has ZERO chance to successfully hide data on an SOC chip.

There is a lot of talk these days about on-chip PUFs (Physically Unclonable Functions). So far I have examined dozens of PUF cells, but have never seen one whose hidden information state could not be discovered. (Usually what’s revealed is how little the PUF design engineer really understands the physics of the PUF cell’s information storage.) (As Bruce often says about crypto, it is easy for anyone to write a crypto algorithm that they can’t break, but that does not mean it is secure.)

The most difficult PUFs to decode are those which are the most unreliable, which is to say they have very little design “noise margin”. Some PUFs even suggest that the lack of design noise margin is their feature. Unfortunately this means that the PUF block effectively has error correction built into the key-generation storage; think about what that means for security. (The raw key might be 128 bits, but because of “noise margin” we need a system whereby any 10 of the 128 bits can be wrong.) HMMMM
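That last point can be made concrete with a little arithmetic. If any candidate within Hamming distance 10 of the true 128-bit key is accepted, each guess "covers" a ball of nearby keys, and the effective search space shrinks accordingly (this is a standard counting argument, not anything specific to a particular PUF product):

```python
from math import comb, log2

def effective_key_bits(n_bits: int = 128, tolerated_errors: int = 10) -> float:
    # Number of keys within Hamming distance `tolerated_errors` of any
    # given key: the volume of a Hamming ball of that radius.
    ball = sum(comb(n_bits, i) for i in range(tolerated_errors + 1))
    # Each brute-force guess now covers `ball` keys instead of one.
    return n_bits - log2(ball)

print(effective_key_bits())   # well under 128 bits (roughly 80)
```

So a "128-bit" PUF key with 10 bits of error tolerance buys considerably less security than its nominal size suggests, which is presumably the HMMMM.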

Dan Smith August 25, 2012 10:50 PM

I see comments here about LEOs quickly trying many possible combinations on iPods and iPads, as a way of defeating the “10 tries and the iPad gets wiped” security.

Or am I misreading this? Does this mean all 10,000 combinations can be tried on a locked iPad that is set to wipe after 10 tries?

Clive Robinson August 26, 2012 5:44 AM

@ Dan Smith,

Or am I misreading this? Does this mean all 10,000 combinations can be tried on a locked iPad that is set to wipe after 10 tries

Yes and no.

No if you are trying from the standard input.

From what has been said (by others) it would appear that the count / wipe function is high up the software stack. If you can get an application onto the phone you can get around this.

The exact mechanism I don’t know; it could be as simple as clearing the failed-attempt counter, or, more complicatedly, working with the APIs further down the stack, thereby bypassing the counting mechanism altogether.

If you “own the CPU” then there are multiple ways to do this sort of thing.

And to be honest I’m not sure that Apple can stop the embedded secret being read out.

If the secret can be read out by a root-privileged app, then there is no reason to actually do the passcode cracking on the iPhone with its comparatively limited-power SoC CPU. You’d put it up on a system based around GPUs or FPGAs, or rent cloud time, and do it in a very short period of time even with telephone-number-length passcodes.
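A sketch of why that matters (the KDF choice, iteration count, and names below are illustrative assumptions, not Apple's actual construction): once the hardware secret is off the device, the whole four-digit PIN space falls to a trivial loop on whatever fast hardware you like:

```python
import hashlib

def derive_key(hw_secret: bytes, pin: str) -> bytes:
    # Assumed construction: disk key = KDF(PIN, device secret).
    return hashlib.pbkdf2_hmac("sha256", pin.encode(), hw_secret, 1000)

def crack_pin(hw_secret: bytes, known_key: bytes, digits: int = 4):
    # With the secret extracted, every PIN can be tried off-device,
    # free of the wipe-after-10-tries counter and the slow SoC.
    for n in range(10 ** digits):
        pin = f"{n:0{digits}d}"
        if derive_key(hw_secret, pin) == known_key:
            return pin
    return None

secret = b"\x01" * 32                 # stand-in for the extracted secret
assert crack_pin(secret, derive_key(secret, "0042")) == "0042"
```

On GPUs or FPGAs the per-guess cost drops by orders of magnitude again, which is why keeping the secret on-device is the whole ballgame.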

RobertT August 26, 2012 5:38 PM

@Dan Smith
One other thing to remember is that it takes both time and energy to erase the flash. If the LEO has the iPhone then they could stop the charge circuit from functioning; typically this is a “charge pump” that uses an external storage cap for the high voltage needed to erase. If you put a suitably sized resistor in parallel across this cap then the voltage will never increase and the erase will never happen. Depending on how the application code is written, this could be completely transparent to the erase code, so the system thinks the erase happened but it never did. Many coders will just call a subroutine that starts the hardware erase (i.e. charge pump and reset); they won’t actually check that the required high voltage was attained.

The hardware modification that I’m talking about is probably beyond the scope of most LEO’s (because they need to find the right nodes to make the modifications) but it is considered a simple hack for any of the big three TLA’s.

Over the years I’ve seen plenty of variants on this hack, such as:

- adding a series resistor to the HV cap (means the desired voltage is reached BUT the current that can be delivered is insufficient to erase)

- changing an external voltage divider used to sense the HV voltage (generally you don’t want voltages on chip that are higher than the battery supply, typically 4.2V for a phone, so to keep the HV sense logic off chip two PCB-mounted resistors are used; if one of these is changed then the sense circuit detects correct HV but the actual voltage is too low to erase, so everything proceeds in the software as if the erase had happened)

- spiking the HV node high to flip a comparator (usually requires a resistor be added to the storage cap)

- ESD discharge onto the cap to blow up the sense circuit

Bypassing Reset circuits is a well studied area…
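The software-side failure RobertT describes can be modeled in a few lines. This is a hypothetical toy, not real firmware from any phone: the "physics" only erases when the high voltage is actually reached, while the naive wipe routine fires the pump and reports success without checking:

```python
class ChargePump:
    """HV charge pump; a bleed resistor across the storage cap (the
    'sabotage') keeps it from ever reaching erase voltage."""
    def __init__(self, sabotaged: bool = False):
        self.sabotaged = sabotaged
    def pump(self) -> float:
        return 1.8 if self.sabotaged else 12.0   # volts actually reached

class Flash:
    def __init__(self, data: bytes = b"secrets!"):
        self.data = bytearray(data)
    def hw_erase(self, volts: float) -> None:
        if volts >= 10.0:            # physics: no high voltage, no erase
            self.data[:] = b"\xff" * len(self.data)

def naive_wipe(flash: Flash, pump: ChargePump) -> bool:
    flash.hw_erase(pump.pump())
    return True                      # fire and forget: never checks HV

def checked_wipe(flash: Flash, pump: ChargePump) -> bool:
    volts = pump.pump()
    flash.hw_erase(volts)
    return volts >= 10.0             # only claim success if erase could happen

f = Flash()
naive_wipe(f, ChargePump(sabotaged=True))   # "succeeds", data survives
```

With the sabotaged pump, `naive_wipe` returns success while `f.data` is untouched, which is exactly the transparency RobertT's hardware hacks exploit.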

Clive Robinson August 27, 2012 2:16 AM

@ Dan Smith,

Some of the hardware hacks RobertT mentioned are very old and were known about back in the early days of cable set-top boxes, when all memory (apart from registers) was external to the CPU.

If you think back just a little while, BIOS security in your PC relied on “battery backed RAM”; simply taking the battery out, or pulling the appropriate pin-header link on the motherboard, did the equivalent.

These sorts of attack are “physical” not “software” and work from the bottom of the security stack up (software is top down).

Are there ways to protect against this sort of bottom-up physical attack? The simple answer is “if tampering is allowed, ultimately no”.

Hence my comments in the post above about constraints on LEO’s with “evidence tampering”. Obviously extra judicial TLA’s and well financed NGO’s such as corporations/criminals/individuals are not constrained by the judicial process unless for some reason they wish to be.

So the question is how you “raise the bar” on physical attacks. The way you go about making things more difficult for the adversary is by the age-old process of “encapsulation”, as seen with “strong boxes” and “safes”.

Basically you work on surrounding the sensitive circuits with some kind of physical barrier, be it a case with specialised screws as seen on set-top boxes, through to various metal cases that are spot-welded etc. As these cases are relatively weak defensive measures, modern methods usually involve various types of encapsulating plastic that set very hard, are very strong, and are resistant to most types of dissolving agents.

However there are several downsides,

Firstly and most importantly, it makes the manufacturing process extremely complicated and very much more expensive. This is not just from the plant/equipment and process side, but also on the “rework” side as well. And it also adds considerable delays in the manufacturing process while the plastics set/cool/cure.

The second downside is that whilst providing quite a difficult challenge to the majority of the law-abiding population, as a single measure it does not add significantly to the actual security, because most plastics can be fairly easily machined in some way: by mechanical cutting / grinding / drilling, by the application of heat in various forms, or by dissolving in some quite lethal chemicals.

Thus other secondary precautions have to be taken, usually by adding or “loading” the plastic with various particles to make mechanical machining of the plastic difficult. Various fillers such as quartz, carbide, carborundum and even diamond dust quickly wear down cutting edges, and soft metal dusts clog up and bind abrasives and serrated cutting tools. Other fillers can be used to limit the effects of chemicals and crude thermal attacks.

However, even these measures have their failings. For instance, thermal energy can be supplied in very precise and controlled ways by a laser, or even by refined forms of arc/plasma cutters that provide very localised vapourisation of material without causing the bulk temperature of the plastic to rise by more than a fraction of a degree.

The partial solution to this is “active prevention”. That is the circuits remain powered and rely on disruption or detection.

The oldest form of this is to use battery backed RAM and make the supply to the RAM very vulnerable. One method is to use “hair fine” enameled wire and randomly coil it over the circuit before the plastic is added. Thus any cutting attack stands a reasonable probability of cutting the wire before the circuits under attack are exposed.

But as we know, RAM can have its contents “frozen in place” by the use of liquid nitrogen etc., so “in theory” the unit could be cooled down, a laser used to burn down to the power connection on the RAM chip, and the hair-fine wire bypassed.

All such detectors can in theory be bypassed in similar ways; all the attacker has to have is a good knowledge of how the system works, and time to practice and refine their techniques. And as the iPhone is a.

Clive Robinson August 27, 2012 3:16 AM

A curse on this smart phone and its dodgy habits with the keyboard driver and how it works with Autocomplete etc. 🙁

@ Dan Smith,

In my above post, the last sentence,

“And as the iPhone is a.”

Got mangled, so to finish it, and the rest of what I was going to say:

And as the iPhone is a consumer-priced item, getting sufficient of them to perfect the technique would not be overly difficult or expensive.

This means that other techniques need to be used to prevent “freezing” or other such attacks.

This means moving from “active protection” to “dynamic protection”, which means splitting the secret down into many parts and stirring them around in such a way that if the RAM is frozen, what the attacker recovers is incomplete in such a way as to render it useless.

There are various ways to do this, but first you have to understand the idea of “data shadows”. The simplest way to explain this is that it works by the difference between two or more values held in memory. For instance, 0xE7 is binary 11100111; if you XOR it with, say, 0x3F (00111111) you get 0xD8 (11011000). You only need to store two of the values to be able to calculate the third. Thus if 0xE7 is the secret you can store 0x3F and 0xD8 in memory; that is, 0xE7 is a data shadow of 0x3F and 0xD8. You can then make one or both of those numbers data shadows as well.

On its own, a data shadow doesn’t add greatly to security. However, continuously moving the numbers that make it up around in memory means that the attacker has to find the pointers to work out which numbers combine into the final data shadow. If the attacker cannot find out the value of the data shadow, then it’s game over.

Arguably if you use data shadows correctly then you don’t need to go about the process of encapsulating the RAM in the first place…

This is because “dynamic protection” has similarities to cryptography, in that it uses information to lock up information in a safe far more secure than a physical safe. However, it differs from cryptography because it is dynamic and always changing, therefore the data is never at rest. Thus great care has to be exercised in implementing it, because at the end of the day a small mistake can leak the secret out in some way (such as via a time-based side channel etc.).
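Clive's XOR data-shadow example can be sketched in a few lines. This is a toy single-byte version (real implementations would cover whole keys and run the refresh continuously in hardware or a tight loop); the point is that the shares can be re-randomized without the secret ever existing in memory:

```python
import secrets

def split(secret_byte: int) -> list[int]:
    # Store two values whose XOR is the secret; neither alone reveals it.
    r = secrets.randbelow(256)
    return [r, secret_byte ^ r]      # e.g. 0xE7 -> [0x3F, 0xD8] when r == 0x3F

def rerandomize(shares: list[int]) -> list[int]:
    # "Stirring": XOR both shares with the same fresh random byte.
    # The XOR of the pair (the data shadow) is unchanged, but a frozen
    # snapshot mixing one old and one new share recovers only garbage.
    r = secrets.randbelow(256)
    return [shares[0] ^ r, shares[1] ^ r]

def recover(shares: list[int]) -> int:
    return shares[0] ^ shares[1]

shares = split(0xE7)
shares = rerandomize(shares)         # repeat as often as you like
assert recover(shares) == 0xE7
```

Note the refresh step never computes the secret itself, which is what makes a mid-update memory freeze yield an inconsistent, useless image.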

RobertT August 27, 2012 7:43 PM

@Clive Robinson
“This means moving from “active protection” to “dynamic protection” ”

It is interesting that you mention this (dynamic key protection), because I recently studied an example of it used for on-chip data protection. The basis of the data scrambling was a form of encryption using a ring homomorphism, along with both a software and a hardware implementation of a Lorenz attractor (chaos circuit).

I don’t mind admitting that it had me fooled for a long time!

When you probe the RAM data it always looks completely random; then you look at both ALU and RAM data together and see that it appears another random number was added before each ALU operation. Confusing to say the least.

BTW this scheme has many of the properties that you talk about WRT hypervisors, because the hardware circuit must remain in sync with the software or the data gets completely scrambled.

Clive Robinson August 28, 2012 6:10 AM

@ RobertT,

I don’t mind admitting that it had me fooled for a long time.

I think it would have given me an almost permanent migraine, and I guess that’s what it was designed to do 😉

It would be interesting to see what the security margins are on it, especially in a widely available consumer product.

Mike November 5, 2012 10:46 PM

It seems RIM has given LE a backdoor into BlackBerry to brute-force the key container. Something Elcomsoft was not able to achieve.

Tom February 27, 2013 9:24 PM

Simson Garfinkel’s article is just naive and poorly researched.

Garfinkel claims law enforcement can’t scarf smartphone data? Simply untrue.

In a word: Cellebrite ( )

For low five-figure costs, gov’t agencies can acquire a unit that will crack and download everything on any iPhone, right up to the iPhone 5, and on all Android units.

The Michigan police who were scooping up smartphone data at routine traffic stops were using Cellebrite UFED units.

Also, if they don’t want to spend a few $10k’s, the agency can mail the iPhone off to Apple with a copy of the warrant or subpoena, and Apple technicians will happily put the entire contents of the phone on a set of DVDs and mail everything back to the agency. They can do this whether there’s a screen-lock password, complete unit encryption, or whatever. iPhans should be very concerned about some hacker ever obtaining Apple’s back door.

Cellebrite units can also do various Windows Mobile flavors, and legacy systems like Symbian & the old Nokia systems.

There’s only one exception. In their user manuals Cellebrite explains that they can only obtain the data from BlackBerrys, of any vintage, if there is no password protection and/or no encryption.

There you have it, guys! There’s only one phone that can keep the snoops with UFED units out of your data: a BlackBerry where you turn on encryption and set a strong password.

Wael February 27, 2013 10:12 PM


From their web site (talking about Android):
“Physical extraction for any locked device is only available if the USB debugging has been switched on”

