Apple's iOS 4 Hardware Encryption Cracked

All I know is what’s in these two blog posts from Elcomsoft. Note that they didn’t break AES-256; they figured out how to extract the keys from the hardware (iPhones, iPads). The company “will be releasing the product implementing this functionality for the exclusive use of law enforcement, forensic and intelligence agencies.”

Posted on May 27, 2011 at 6:04 AM • 59 Comments

Comments

John May 27, 2011 6:18 AM

I see that many of their products are available to the general public, and that they are originally a Russian company.

Pedro Melo May 27, 2011 6:31 AM

Interesting, but I don’t see any mention of the amount of time needed to obtain the keys, only how long it takes to brute-force the passcode, and that once you have the keys, deciphering the information is instantaneous.

Mark Nunnikhoven May 27, 2011 7:17 AM

Their product works against some stronger passwords as well. It runs a brute force starting with 0000–9999 and then moves through a dictionary and combination data set (e.g., password1).

Speed depends on the hardware and edition of the software. Pro version supports GPU acceleration.

2 additional notes:

  • the product works on backups obtained from a workstation and may work against a forensic image (haven’t tested the image)
  • the product also pulls the data highlighted in the Fraunhofer password study from Feb/11

Mark
@marknca

BF Skinner May 27, 2011 7:26 AM

I know what law enforcement and intelligence agencies are. But what’s a “forensic” agency? Forensic specialists in some states are regulated but not all.

But here again we see a company selling rooting to Gov’t/LEO/IC. I couldn’t find out whether this is another case like our French friends who will NOT disclose their method to Apple. They didn’t say they wouldn’t, but I can’t believe their business model allows for disclosure.

Andrey Belenko May 27, 2011 7:28 AM

@all: it’s not about passcode bruteforce at all. You can decrypt most of the files without knowing the passcode.

GPU is irrelevant since the bruteforce (should you decide to do it) must be run on the device, so you simply can’t use a GPU.

@Pedro Melo: key extraction is instant; passcode bruteforce (in the rare cases it is really needed) is slow – about 30–40 mins for all 4-digit passcodes, significantly more if the passcode is longer/alphanumeric.

Chris May 27, 2011 7:54 AM

I was waiting for Bruce to weigh in on this. Here’s the rub that I have yet to determine:

Since PIN authentication must be performed on-device, does the “Wipe after 10 attempts” setting essentially render this “recovery software” useless for getting keychain items (and thus data) secured by kSecAttrAccessibleWhenUnlocked?

Neil May 27, 2011 8:23 AM

There’s a response from Elcom in the comments section on their first blog post that says they are able to bypass the portion of the API that counts passcode attempts. So, they are free to make as many attempts against the passcode as they like without wiping the device.

Based on how long they claim it takes them to crack a 4-digit passcode, you can estimate the number of attempts per minute (I think about 250). You would in theory be able to determine an acceptable attack period and work backwards to a minimum complexity and length for the passcode.
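
A rough sketch of that back-of-the-envelope estimate (the 40-minute figure is the one quoted in this thread; everything else follows from it, nothing here is measured):

```python
# Back-of-envelope: implied on-device guessing rate, and how long
# longer numeric passcodes would survive at that rate.
total_4digit = 10_000                    # 0000-9999
rate = total_4digit / 40                 # ~250 attempts/minute

def expected_minutes(keyspace):
    """Average time to hit a randomly chosen passcode (half the keyspace)."""
    return (keyspace / 2) / rate

print(f"rate     : {rate:.0f} attempts/minute")
print(f"6 digits : {expected_minutes(10**6) / 60:.1f} hours on average")
print(f"8 digits : {expected_minutes(10**8) / (60 * 24):.1f} days on average")
```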

Muzaffar Mahkamov May 27, 2011 8:46 AM

@Chris: ElcomSoft replies that they don’t use the API for the bruteforce attack, hence the attempts are not counted (http://blog.crackpassword.com/2011/05/elcomsoft-breaks-iphone-encryption-offers-forensic-access-to-file-system-dumps/comment-page-1/#comment-23319)

I’m surprised that bruteforce protection is not implemented at the security processor level.

What’s more disturbing is the amount of data recorded by the iPhone. “screen shots of applications being used”?

What’s the purpose of gathering & storing all that information if Apple claims it’s not transmitted to the Apple servers?

Clive Robinson May 27, 2011 8:47 AM

Hmm, why am I not surprised.

Not being nasty but Apple for the past twenty years has been about style over substance.

They have had numerous security issues with their mobile devices, including shipping product with a PC virus on it.

Has this stopped people buying their product?

No, much like Apple many of their users are style over substance, not just in their outlook but in their entire life (as those who have worked with them find out).

Chris May 27, 2011 9:05 AM

@Neil, @Muzaffar

Thanks for the clarification/link on the PIN code wipe API bypass Elcomsoft is using. I’d love to see more details, but they’re not likely to reveal their secret sauce. Theorycrafting below.

If they’re bypassing the API, Apple likely hasn’t signed whatever code is necessary to do that. So this likely requires a jailbreak to run (tethered, even) – what happens if iOS (N+1) is jailbreak-proof? Also, I wonder if a microcode update to the SoC is possible that would close that hole? Either way, I’m sure Apple has bought a license to this software and is working on how to make it useless.

@Muzaffar re: screenshots
Apple takes the screenshot of the running app as part of the iOS task switching – that way when you switch back to the app in question, it shows the zoom-in animation to the static snapshot while the app itself is being relaunched in the background.

@all re: protecting against this
Looks like a nice fat complex password is the safest bet. At a rate of ~250 attempts/minute bruteforce, eight characters in the 72-character Aa1! keyspace should be enough that the device suffers a manufacturing defect before being broken. 😉
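
A quick check of that estimate, as a sketch (the ~250 attempts/minute rate and the 72-symbol alphabet are the assumptions above):

```python
# Time to exhaust an 8-character password drawn from a 72-symbol
# alphabet at ~250 on-device attempts per minute.
alphabet, length, per_minute = 72, 8, 250

keyspace = alphabet ** length            # ~7.2e14 candidates
minutes_per_year = 60 * 24 * 365
years = keyspace / per_minute / minutes_per_year
print(f"keyspace  : {keyspace:.2e}")
print(f"worst case: {years:,.0f} years (halve for the average case)")
```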

Mark Nunnikhoven May 27, 2011 9:19 AM

@all – re:capabilities
One thing to point out to make sure everyone is on the same page, this tool works on device BACKUPS and not the device itself.

http://www.elcomsoft.com/help/eppb/index.html > Program Information > System Requirements

and

http://www.elcomsoft.com/phone_password_recovery.html > question #2

@all – re:brute force
In my testing, the tool caught 4-digit PIN and simple passwords in a reasonable amount of time (~4 hrs running in a non-accelerated VM) but was unable to break a complex–but easy to remember–multi-word password. When run overnight on the complex password, the application displayed a message stating that it was unable to find the password after running through all combinations it knew about.

Neil May 27, 2011 9:26 AM

The screen captures cached on the device are used to make the fancy animated context shifts when a screen is twisted, flipped, or slides. It’s not nefarious, although it does seem sloppy from a device security standpoint.

Chris May 27, 2011 9:39 AM

@Mark: “One thing to point out to make sure everyone is on the same page, this tool works on device BACKUPS and not the device itself.”

From the first link, Elcomsoft states:

“Ten thousand combinations do not sound like much. On a PC, breaking a passcode of this length would only take a few moments. Unfortunately, passcodes can only be bruteforced on the device itself.”

ATN May 27, 2011 9:40 AM

“a complex–but easy to remember–multi-word password”

Ah, so you are not using a complexity checker for your password, which would fail it because the password is not a single word!!! (Windows tools, no multi-word passwords)

Spaceman Spiff May 27, 2011 9:51 AM

Well, now that it is known that this attack is possible, who wants to wager how long before the black hats have it?

Chris May 27, 2011 10:01 AM

@Spiff – The only really new thing here is the ability to attack the PIN code without triggering the bruteforce protection; there have been methods to grab an image of an iDevice for some time now IIRC.

not-an-iphone-or-android-user May 27, 2011 10:44 AM

@Muzaffar Mahkamov

What’s more disturbing is the amount of data recorded by the iPhone. “screen shots of applications being used”?

Perhaps that explains the “camera shutter”-like sounds from my two friends’ iPhones every time they were done using them.

Or alternatively does the iPhone take a snapshot of the user at the end of each “session”?

Chris May 27, 2011 10:50 AM

@not-an-etc

The “camera shutter” is the “lock closing” noise to indicate the iPhone going into sleep mode.

Dirk Praet May 27, 2011 11:15 AM

So this is an attack on key management rather than an attack on encryption. Protecting a 256-bit key with a 4-digit PIN does not give you 256 bits of cryptographic strength. I’d say it’s more like 13 bits: 10^4 PINs ≈ 2^13.3, so a 4-digit PIN provides only about 13 bits of cryptographic strength.
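
A quick check of that arithmetic, as a sketch:

```python
import math
# A 256-bit key unlocked by a 4-digit PIN is only as strong as the PIN.
print(math.log2(10 ** 4))   # ~13.29 bits
```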

The increasing popularity of OS X and i-devices may well end up pushing Apple into the same corner where Microsoft has been for such a long time.

hacker May 27, 2011 11:44 AM

Everything elcomsoft can do is covered by the code and slides here: http://code.google.com/p/iphone-dataprotection/

To get the code on google to work requires some domain knowledge, but free and easy to use tools to brute force the passcode and extract needed keys to read acquired filesystem (dd) images will likely materialize.

But so will new hardware from apple along with iOS 5.

Pepijn May 27, 2011 12:43 PM

Elcomsoft is the company whose employee, Dmitry Sklyarov, was arrested and tried in the US for allegedly violating the DMCA. How ironic and disappointing that now they appear to work for the man…

Chris May 27, 2011 2:10 PM

@hacker

Well, I stand corrected then; it appears that all Elcomsoft has done is put that existing ability into a shiny package.

aikimark May 27, 2011 2:30 PM

@Andrey Belenko

Thank you for participating in the discussion and clarifying/correcting us. Congratulations to Elcomsoft on this crypto work.

NZ May 27, 2011 2:47 PM

The second post mentions three ways to access the file system, the 1st one being “One can ‘mount’ the device, mapping it as a drive letter and copy data file after file.” Does that mean using a tool like iPhoneBrowser? It seems so, since they say “In ‘jailbroken’ state, all information stored on the device may be available.” That is, the afc2 service… errm, daemon gives full access. If so, then all this encryption could have been bypassed long ago…

Andrey Belenko May 27, 2011 3:43 PM

@aikimark My pleasure. There is quite some misunderstanding of what we’ve done (even here in the comments), so I am trying to clarify things 🙂

@NZ Yes – in theory. That is called logical acquisition, as opposed to (more thorough) physical acquisition. Also, the “afc2” method has considerable limitations: you can’t read files that are protected with the passcode (not unless you know the passcode; with new tools you can use escrow keys instead of the passcode), and you have to deal with each file separately instead of just one filesystem image. Besides, I am not sure if a non-invasive tethered jailbreak will be enough for this theoretical mode to work.

aikimark May 27, 2011 4:29 PM

@Andrey

Questions:

  1. Once you’ve gathered all the keys, couldn’t you get better decryption performance working on a byte-by-byte copy of the hard drive and a more powerful emulator?

  2. How do you address the (US) courts’ insistence on forensic cyber work being done on a read-only copy?

  3. Although Elcomsoft has come a long way toward establishing credibility in the business community, do you still encounter prejudice because you’re based in Russia? (lingering Soviet or mob fears)

tommy May 27, 2011 4:35 PM

At the risk of sounding like a confirmed Luddite, this writer has been saying for years that sensitive transactions, like banking, should not be done on “smart phones”. Surprising that anyone at this blog would do so, or that they’re surprised that this happened.

Personally, I’d rather not even have a sensitive conversation on any cell phone, since it’s so easy to eavesdrop. To monitor my home phone, you at least have to go to a little more trouble…

Richard Steven Hack May 27, 2011 9:38 PM

Chris: “what happens if iOS (N+1) is jailbreak-proof?”

Until they find out how to do that – which so far appears unlikely – we don’t know. I wouldn’t hold my breath waiting for Apple to outwit the best hackers in the world (short of sealing the entire device in Carbonite – and even Leia beat that one.)

As for the “Law Enforcement” edition leaking, it’s only a matter of time. It may be hard to find but someone will get their hands on it outside of LE. I think most of Elcomsoft’s stuff is available illegally somewhere if you look around.

Richard Steven Hack May 27, 2011 10:08 PM

Offtopic but relevant since it implies RSA SecurID compromise:

Hackers Broke Into Lockheed-Martin Networks: Source
http://www.msnbc.msn.com/id/43199200/ns/technology_and_science-security/

Quotes

“They breached security systems designed to keep out intruders by creating duplicates to “SecurID” electronic keys from EMC Corp’s RSA security division, said the person who was not authorized to publicly discuss the matter….The hackers learned how to copy the security keys with data stolen from RSA during a sophisticated attack that EMC disclosed in March, according to the source…Rick Moy, president of NSS Labs, an information security company, said the original attack on RSA was likely targeted at its customers, including military, financial, governmental and other organizations with critical intellectual property.

He said the initial RSA attack was followed by malware and phishing campaigns seeking specific data that would link tokens to end-users, which meant the current attacks may have been carried out by the same hackers.

“Given the military targets, and that millions of compromised keys are in circulation, this is not over,” he said.

EMC disclosed in March that hackers had broken into its network and stolen some information related to its SecurIDs. It said the information could potentially be used to reduce the effectiveness of those devices in securing customer networks.

EMC said it worked with the Department of Homeland Security to publish a note on the March attack, providing Web addresses to help firms identify where the attack might have come from.

It briefed individual customers on how to secure their systems. In a bid to ensure secrecy, the company required them to sign nondisclosure agreements promising not to discuss the advice that it provided in those sessions, according to two people familiar with the briefings. [In other words, “security through obscurity” – MY NOTE]

End Quotes

Fun quote: “Security experts say that it is virtually impossible for any company or government agency to build a security network that hackers will be unable to penetrate.”

i.e., “There is no security. Suck it up.” 🙂

These companies are precisely the ones I’d target if I was a hacker looking for some really juicy data to sell on the open market, say, to China, other Asian countries, the EU, Israel.

An iOS Dev May 28, 2011 12:59 AM

Re ‘Screenshots’ being taken by iOS:
True. iOS does that so that switching between Apps looks more fluent. Apple specifically points that out in their developer docs and warns devs to clear sensitive info from the display before an App relinquishes control.

Andrey Belenko May 28, 2011 1:38 AM

@ aikimark

  1. We actually decrypt the image (byte-by-byte) offline, on a PC. The on-line (on the device) phase includes extracting the keys (a few seconds), obtaining the filesystem image (if you don’t already have one; takes about 1 hour 30 minutes for a 32 GB iPhone, and files in the resulting image are encrypted), and, possibly, passcode bruteforce. Passcode bruteforce can’t be offline because it uses the UID key, which is currently undetachable from the device. Once you’ve got the extracted keys and the image, you go to the PC and decrypt the image.
  2. We try to address this by providing a two-step process: obtain the physical (encrypted) image and then decrypt it. This way the steps are reproducible. It is possible that vendors of mobile forensics toolkits will provide on-the-fly decryption one day – this will effectively eliminate the need to produce a decrypted image and make the whole process compliant with US regulations.
  3. I do see some prejudice in the press sometimes, but I’d say that things are much better than, say, 3 years ago. I can’t tell whether this affects sales or not because I’m not involved in sales.

@ Chris: “what happens if iOS (N+1) is jailbreak-proof?”

This particular method relies on a jailbreak which can’t be fixed by iOS upgrade, so it will work for supported devices with any iOS version. The real thing is that Apple might well change data protection/encryption mechanisms in iOS 5 and this will require additional research.

Muzaffar Mahkamov May 28, 2011 2:16 AM

@Dirk Praet:
I guess the security processor is most likely using some key derivation algorithm on the PIN before using it for key encryption (e.g. mixing the PIN with a secret salt and hashing with SHA-256 in multiple rounds, giving a 256-bit key at the output). That’s why the keys can be bruteforced on-chip only. Since you cannot obtain the secret salt without cracking the hardware, which is much more expensive and cannot be automated, the resulting key-encryption key is reasonably 256-bit strong.
Ideally, the encryption keys should be stored in a (secure) crypto processor which never reveals them, even to the OS. Then the PIN can be used only for authentication to the security processor. The data encryption/decryption would be done ‘on the chip’.
But again, the security processor has to enforce a policy on the number of PIN attempts, otherwise everything else is useless.
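
A minimal sketch of the kind of derivation described above, assuming a PBKDF2-style construction with a device-unique salt. This illustrates the general idea only, not Apple's actual scheme, which entangles the passcode with the hardware UID key inside the crypto engine:

```python
import hashlib

# Hypothetical PIN-to-key derivation: mix the PIN with a device-unique
# secret salt and iterate a hash. The salt value and round count here
# are made up for illustration.
DEVICE_SALT = bytes.fromhex("13371337" * 8)   # stand-in per-device secret

def derive_kek(pin: str, rounds: int = 100_000) -> bytes:
    """Derive a 256-bit key-encryption key from a short PIN."""
    return hashlib.pbkdf2_hmac("sha256", pin.encode(), DEVICE_SALT, rounds)

print(derive_kek("1234").hex())   # 256-bit output, but only ~13 bits of PIN entropy
```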

Clive Robinson May 28, 2011 6:15 AM

@ Muzaffar Mahkamov,

“Ideally, the encryption keys should be stored in a (secure) crypto processor which never reveals them, even to the OS”

Or as importantly the hardware.

There is a “Catch-22” situation with a lot of hardware, which is verification of functionality during the manufacturing process. You have to be able to “exercise all functionality” as part of the test process.

This also applies to the chips etc., so if this test functionality is not built correctly it can leak information in one way or another, allowing recovery of “hidden data” (some Intel devices have been shown to suffer from this issue in the past).

As far as I’m aware there are no consumer grade secure CPU’s manufactured in places where the actual chip designs can be guaranteed not to be altered by others in some way prior to mainline production.

Further, even when the chips are supposedly guaranteed to be “as designed” and paid for out of Mil budgets, the Chinese (for one) appear to be able to still get at data supposedly beyond recovery (as supposedly shown from the downed US aircraft).

So… If you have access to the actual device then there’s a reasonable chance it’s game over, irrespective of the OS or other software.

Gabriel May 28, 2011 7:48 AM

@Richard: I believe at the point she freed Solo, Leia would have been an insider threat. A mole who got in due to a lack of proper authentication and a certain crime lord’s affinity for ruthless scum like himself (by proving yourself willing to use a thermal detonator)

I thought the problem with locking down a phone, game console, or any other device is that they either use software locks that can be exploited, or they use hardware-based encryption, which doesn’t have the robust key management required to thwart a determined attacker. Usually this is due to cost.

Dirk Praet May 28, 2011 11:29 AM

@ Muzaffar Mahkamov

“e.g. mixing the PIN with a secret salt and hashing with SHA-256 in multiple rounds, giving 256-bit key at the output”

Good point. Ideally, yes, but since I’m not familiar with the specs of the security processor, I’m not taking that for granted. If there’s one thing I’ve learned in this business, it’s that assumption is often the mother of all f*ck-ups. Some recent breaches have shown that even so-called security firms like HBGary didn’t bother to salt the password hashes in their database.

Richard Steven Hack May 28, 2011 10:07 PM

The Lockheed story is all over the news. I found this comment from PC Magazine funny as well as probably stupid:

“Classified information is likely out of hackers’ hands: Due to the volume of attacks that these kinds of systems [face] on a daily basis, it’s highly doubtful that Lockheed—or any security contractor—would keep top-secret information within reach, should one ever breach the remote access gates.”

If PC Magazine thinks classified info is only kept on boxes unattached to the internal network (let alone the external Internet), they are being really naive. I have no doubt that classified info is regularly kept on machines attached to the internal network.

ANY breach IS a breach and has the potential to be exploited beyond the initial breach location. That should be obvious. That it’s not always the case does not change that fact.

This Lockheed spokesman says: “we remain confident in the integrity of our robust, multilayered information systems security.”

And that remark is just stupid PR. ANY security person who “remains confident” of his security is by definition insecure.

And this: “According to a source, once Lockheed was made aware of the attack, the company began instigating new security measures to prevent future breaches. These included shutting down some of the company’s remote access capabilities on its systems, as well as a new order for 90,000 replacement SecurID tokens for the company’s employees. Users were also asked to change their passwords company-wide.”

Wow – employees had to change their passwords – which probably means they weren’t regularly changing them anyway. Way to inspire “confidence”.

Richard Steven Hack May 28, 2011 10:19 PM

This is funny. Found this comment over at BoingBoing:

Quote

jackbird

I’ve done some on-site contract work for LMCO, and their IT is extremely tightly locked-down (and this is for non-clearance-requiring stuff).

So much so that employees have trouble doing things like provisioning servers.

So they end up running test servers off their home internet connections and other foolishness.

End Quote

Great. Sheer genius. Lockheed employees run their TEST SERVERS off their HOME networks.

Tell me again how “confident” they are about their security.

Note what this also proves yet again: that the more security you implement, the LESS secure your system becomes because your people start evading the security system to get work done.

Nick P May 28, 2011 10:48 PM

“Note what this also proves yet again: that the more security you implement, the LESS secure your system becomes because your people start evading the security system to get work done.”

It’s a recurring theme that looks unavoidable. It’s always about hitting the sweet spot between too little and too much.

RobertT May 29, 2011 8:41 AM

@ Muzaffar Mahkamov,

“Ideally, the encryption keys should be stored in a (secure) crypto processor which never reveals them, even to the OS. Then the PIN can be used only for authentication to the security processor. The data encryption/decryption would be done ‘on the chip’.”

I know a tiny bit about secure processor design, so let me say unequivocally, there is ABSOLUTELY no known way to store data on a chip that can not be recovered by a motivated well funded adversary.

The cost to rent the machine to help extract this information is about $1500/hr. It is conceivable that someone with detailed knowledge of the secure processor layout could complete this key recovery task in under 2 hours. It is also possible that a secure processor could be opened up, have the stored keys extracted, AND be put back into a system and still be completely functional. There might be some anti-tamper “trip wires” to bypass, but most anti-tamper stuff really just delays access; it has never been particularly problematic to bypass anti-tamper systems.

So even in this extreme worst case, when a direct FIB attack on the Chip IC was required it could cost as little as $3000USD.

As Clive mentioned the normal approach to recovering data from a secure processor, is to find some way to re-enable a test mode circuit or repair a Lock-out-bit, these activities are trivial and cost just $100’s primarily in labor costs. Often the lock-out functions can be made to work at some over / under voltage condition. In these cases there is not even physical evidence that the chip was tampered with.

So before you trust your most valuable secrets to a “secure processor” I think you need to consider just how insecure even the best mil grade parts actually are.

The most important aspect of hiding a secret on an IC is protecting the database. So it is fundamentally security by obscurity: no one knows where to look for the answer. This means believing that the adversary has neither extracted the database himself (from a product he obtained) nor procured a copy of the database from some willing lowly-paid individual.

Just to emphasize this point, I have seen a situation where the person entrusted with managing the chip databases (and mask generation) was actually a high-ranking employee of the Stasi. Now that was 20 years ago, but I’d say that the desire to embed your people into these key jobs has only increased over the last 20 years.
For me, what is truly disturbing is that this key individual probably now lives in S-Korea, Taiwan or China.

Clive Robinson May 29, 2011 12:49 PM

@ Richard Steven Hack, Nick P,

“Note what this also proves yet again: that the more security you implement, the LESS secure your system becomes because your people start evading the security system to get work done.”

That appears to be the case, and as Nick P notes there might be a sweet spot. Or at some point the “see-saw” balances between “Usability -v- Security”.

However I’m going to go out on a bit of a limb here and say that it’s actually not true.

We know from other security activities that there appears to be at least one other similar rule, “Efficiency -v- Security”.

But I know from designing some systems that it’s not a generalised case, there are some situations where with the correct knowledge “Efficiency -v- Security” is not an issue. That is you can design efficient systems that are secure.

But the designs have not been generalised systems but designed for very specific functions.

This tends to make me think that it is actually the same issue with regards to “Usability -v- Security”: as long as we aim for “generalised” we will not achieve the level of security we either want or need.

So the next time a requirements spec crosses your desk, get the big red marker pen out and as a first step remove all the marketing cruft/crud and gut it back to the bare minimum/essentials.

I firmly believe that security is a quality issue and exactly the same methods that give us quality products will give us secure products. What we have to give up is the “jack of all trades” attitude where our systems have to be “all things to all men” in “every possible way”. Then maybe, without all the baggage, our systems might stand a chance of getting off the ground.

asd May 29, 2011 2:49 PM

Does the chip need to be secure? You could probably take an off-the-shelf processor and run a program and data that are encrypted.
Even weaknesses in the chip wouldn’t affect the data.
Some people are trying to do this and say you could only do specific tasks, but if you want to display text in a document viewer, for example, you could have a function that takes 16 bytes and uses xor/or/and to change that to one byte, “e”.
There would be multiple ways the 16 bytes could produce that one byte “e”, where the xor/or/and function is generated at runtime or by a random function.

asd May 29, 2011 3:27 PM

@Richard Steven Hack: The attack is probably a diversion. I would think if you’re going after a target like that, you would set things up so they would notice straight away, and if you can make them work to find something, they will more likely be over-confident that it is the only one.

asd May 29, 2011 3:45 PM

I bet their web server was pwned, and in say 1.5 months we will hear a story that the attackers had used a different door to access their internal network and are now dumping gigs of data 🙁

Clive Robinson May 29, 2011 4:16 PM

@ asd,

“Does the chip need to be secure? You could probably take an off-the-shelf processor and run a program and data that are encrypted.”

The idea is you are protecting a secret; it does not matter what the secret is, but you are trying to protect it. Usually the secret is data and it is kept by encrypting it against a key that in turn becomes the secret; thus you have not solved the problem of keeping a secret, you have merely moved it.

Currently if the hardware you use is owned at the chip internal circuit level then it means that in a single CPU system the secret must be known to the CPU therefore the circuitry that makes the chip owned can reveal the secret in some manner.

That is, with regards to encryption, obviously the program that goes into the ALU and program control unit cannot be encrypted or it would not function the way intended. Which means at some point an encrypted program needs to be decoded. Whether it is in the instruction control unit or address decode unit or earlier really does not much matter, as the decryption key needs to be known and applied at some point and is thus available to the chip-owning circuits.

Currently, likewise with regard to encrypted data (though this might change in time).

Currently there are some operations that you can do irrespective of the encryption type such as compare (provided it’s sized at an integer multiple of the encryption block size). And others such as add and subtract can be performed on “additive” ciphers provided over or underflow is addressed correctly.
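
A toy illustration of that additive-cipher point, as a sketch; the modular wraparound is exactly the over/underflow handling mentioned above:

```python
# Toy additive cipher over 16-bit words: E(x) = (x + k) mod 2^16.
# A CPU that never sees k can still add a constant to the ciphertext;
# decryption recovers the sum, provided both sides wrap modulo 2^16.
M = 2 ** 16
k = 0xBEEF                       # secret key, unknown to the adding CPU

enc = lambda x: (x + k) % M
dec = lambda c: (c - k) % M

c = enc(40_000)
c2 = (c + 30_000) % M            # blind addition on the ciphertext
assert dec(c2) == (40_000 + 30_000) % M
print(dec(c2))                   # 4464, i.e. 70000 mod 65536
```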

However the bulk of manipulations that you might wish to apply to data cannot be done unless the data is decrypted first. This means that the key to the data has to be known to the CPU.

Which brings you back to the point that the key is available to the owning circuits.

Can the use of multiple CPU’s solve the issue, well yes and no… There are ways of sharing a secret amongst multiple parties without them knowing the whole secret. For instance look at the way a block cipher key is expanded into round keys. In theory each round key is expanded by a one way function from part or all of the actual cipher key. Provided the one way function is one way then it is not possible for the single round CPU to leak the cipher key because it is unknown to it. Further if whitening by stream cipher is employed to the data then the block cipher round CPU even if owned has little or no information to leak.

However you have actually now moved the problem from the block cipher to the stream cipher used for whitening the data both before and after encryption.

So the trick is coming up with some way such that not only is the secret shared only in part with any given CPU, but the CPU does not know what part of the encryption/decryption or other function it is involved with. To do this means that you need another CPU or state machine responsible for dividing up not just the work amongst the CPUs but also the data and the key. With care it should be possible to provide data switching etc. such that the secret does remain in lots of unknown (to the adversary) pieces, but it is unlikely to be either efficient or fast.

It is however something I am looking into in my spare time, as it is an area that interests me as a technical challenge.
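
A minimal sketch of the basic splitting step (plain n-of-n XOR secret sharing; Clive's scheme layers one-way functions and work division on top of this, so this is only the starting point):

```python
import secrets
from functools import reduce

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split(key: bytes, n: int) -> list:
    """n-of-n XOR sharing: any n-1 shares reveal nothing about the key."""
    shares = [secrets.token_bytes(len(key)) for _ in range(n - 1)]
    shares.append(reduce(xor, shares, key))  # final share completes the XOR
    return shares

key = secrets.token_bytes(32)                # a 256-bit secret
shares = split(key, 4)                       # e.g. one share per CPU
assert reduce(xor, shares) == key            # only all four recombine it
```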

asd May 29, 2011 4:38 PM

@Clive Robinson
“However the bulk of manipulations that you might wish to apply to data cannot be done unless the data is decrypted first. This means that the key to the data has to be known to the CPU.”
I kind of understand what you mean. Say the code (binary) is encrypted with a basic encryption apart from the first block (the OS passes it a key), which can then be used to decode a group of instructions up to a call/jmp, which then gets overridden and made to point back to the start to decode more instructions (rinse/repeat).
The code might be xored with the key, then maybe rotated back to front.
The key the OS passes is generated on the spot, and before it starts the binary it encodes the data.
Even if the CPU can pass the key to a third party, there will be multiple keys that get randomly created or even changed by the binary itself, and the old key will no longer be able to decode the binary (as long as the computer is running).
For data, real encryption… truecrypt or whatever

It should stop exploits in software, unless the shellcode can carry the whole asm to run, as the binary code would be garbage

Dirk Praet May 29, 2011 6:20 PM

@ Clive

“So the next time a requirements spec crosses your desk, get the big red marker pen out and as a first step remove all the marketing cruft/crud and gut it back to the bare minimum/essentials.”

A similar result can generally be obtained by replacing the question “what do you want” by “what are you trying to achieve, and why”. In most cases, this approach is guaranteed to yield an entirely different spec.

asd May 29, 2011 9:22 PM

@Clive Robinson
“However the bulk of manipulations that you might wish to apply to data cannot be done unless the data is decrypted first. This means that the key to the data has to be known to the CPU.

Which brings you back to the point that the key is available to the owning circuits.”

What about a password you enter at the keyboard that gets sent to a program, where the program displays a message (if “g”, press “t”: a Caesar cipher, but more high-tech)? All the CPU would see would be the Caesar cipher (mutable) and keyboard inputs.
The program might take input (a hash) from a USB stick to generate the “g”-to-“t” mapping.
How would the CPU know the password? Keyboard inputs? Weakness?
Cheers

RobertT May 29, 2011 9:42 PM

@asd,

Unfortunately with security there are no half measures. If you need to encrypt the program it must use the same strength algorithms as data encryption, otherwise it is only a distraction (there is no real security increase; rather, you just have one more insecure task to do before obtaining the real program).

Many “secure processors” try to cheat with program encryption and just do something simple like XOR the program op-code with several Keys. This is trivial to undo because the opcode set size is only 8 bits or 16 bits. As a result the encrypted program looks like a simple substitution cypher. Simple frequency analysis techniques tell you which encrypted op-codes are the most common so by comparing this encrypted “most common instruction” with the most common instruction for the processor core you can extract half the program.
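
A sketch of that frequency-analysis step, under toy assumptions: the "most common opcode" value below is made up, and a real attack would use the target core's actual instruction statistics on firmware pulled off the part:

```python
from collections import Counter

# Recover a single-byte XOR key from an "encrypted" opcode stream.
# Assumption: we know the core's most common opcode in typical firmware
# (0x4F here is a made-up stand-in, e.g. for a MOV encoding).
MOST_COMMON_OPCODE = 0x4F

def recover_xor_key(encrypted_rom: bytes) -> int:
    top_byte, _ = Counter(encrypted_rom).most_common(1)[0]
    return top_byte ^ MOST_COMMON_OPCODE

# Toy demo: firmware dominated by the common opcode, XORed with 0xA5.
plain = bytes([MOST_COMMON_OPCODE] * 60 + list(range(40)))
cipher = bytes(b ^ 0xA5 for b in plain)
assert recover_xor_key(cipher) == 0xA5
```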

The next level up is to XOR the PC address with the data (this is very common) but equally insecure. To do this task properly you need to pick a sufficiently wide ROM Block size (say 256 bits) and encrypt this block. Now this is OK for linear programming BUT it is extremely inefficient for Calls and branches.

Oh in case I forgot to mention it, the decrypt key for the program needs to be stored on the chip, so this takes us back to the original system where the security is limited by your ability to hide the key in the chip circuitry…

For more secure mil type systems the program decrypt key is loaded into RAM at the beginning of the session, here it is typically stored as the sum/difference of several synchronous on chip counters. Usually the individual counter segments are used as-is and the encryption algorithm is designed to do the addition / subtraction within the decode process. This way we never have a plain text copy of the KEY directly stored on chip at any time.

asd May 30, 2011 3:25 AM

@RobertT, would you have a problem with frequency analysis if you only encoded small blocks (bound to be duplicate code somewhere)? And would it be possible to use different keys for different blocks, but still be able to decode them later, without storing too many keys in predictable places in memory (but easy to find or work out for the code)?
Cipher or Hash?
Cheers

AC2 May 30, 2011 3:36 AM

@Clive


“Currently if the hardware you use is owned at the chip internal circuit level then it means that in a single CPU system the secret must be known to the CPU therefore the circuitry that makes the chip owned can reveal the secret in some manner.
However the bulk of manipulations that you might wish to apply to data cannot be done unless the data is decrypted first. This means that the key to the data has to be known to the CPU.
Which brings you back to the point that the key is available to the owning circuits.
Can the use of multiple CPU’s solve the issue, well yes and no…”

Re data only, one way is an encrypted drive that has an independent user interface for key entry and which handles the decryption itself…

Eg a USB drive with a built-in PIN pad..

http://www.crunchgear.com/2009/04/20/review-lenovo-thinkpad-keypad-protected-usb-drive/

Of course you then have to trust the chip on the drive to not leak the key or get ‘owned’.

And any CPU will have the decrypted data available to it (if not the key), and that gap is applicable to this approach as well..

RobertT May 30, 2011 5:55 AM

@Asd
“would you have a problem with frequency analysis if you only encoded small blocks (bound to be duplicate code somewhere), and would it be possible to use different keys for different blocks”

Yes you can do this BUT this program cypher is definitely not a secure cypher; it is not even close to being secure. So anyone who understands what was done can decypher the program quite easily. You may say: so what! It is the data that must be secure, not the program. However, I would argue that each layer you add to the decryption problem adds to the time required.

Encrypting the program allows you to hide the nature of the cypher (I know, it’s security by obscurity), but even simple program obfuscation adds a level of difficulty to the decrypt task.

All of this discussion does nothing to change my original point, which is that IF the key is in any way stored on the chip then it can be recovered. So if you cannot maintain physical security of the product then you also cannot be absolutely certain that someone has not extracted the encryption key.

Andrey Belenko May 30, 2011 10:34 AM

@VladK No, because all currently available products (including the one you’re linking to) can’t decrypt the physical image they acquire. So you end up with a filesystem image in which the files are encrypted.

Jonadab May 31, 2011 5:31 AM

what happens if iOS (N+1) is jailbreak-proof?

Theoretically, I don’t know how they would accomplish that. If the user has unlimited physical access to the device, then ipso facto the user ultimately has control over what software it runs (within limits — e.g., obviously, the software in question must be compiled for the correct architecture). Many attempts have been made to prevent the user from being able to change a programmable device’s programming. To my knowledge, the only such attempts that have succeeded were ones where the device was only produced in small numbers (and thus there were not many people working on the problem).

@ATN: A multi-word password can have considerable complexity even if the attacker has a copy of your dictionary, assuming your dictionary is reasonably large.

The last time I did the calculation, using the dictionary file I had at the time on FreeBSD, umm, version 5, I think, I estimated that a password composed of three words contains slightly more complexity (37.1) than an eight-character password composed of arbitrary printable ASCII characters (i.e., anything you can easily type on a standard US keyboard — complexity 36.4). The dictionary file on my current system (Debian Squeeze) appears to be somewhat smaller, but adding a fourth word more than overcomes that, yielding a complexity of 45.9. The result, four words strung together, is still easier to remember than a traditional eight-character mixed-case alphanumeric password (complexity 33.0).

The disadvantage is that such a password takes a larger number of keypresses to type, which can be inconvenient, particularly for passwords that you type very frequently. But they’re fantastic for passwords that you use more infrequently and have more trouble remembering.

(The complexity formula I used for the above numbers is simple: natural log of the number of possible passwords. Of course, if you choose your password in a way that makes some passwords much more likely than others, then your complexity is reduced.)
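
Jonadab's numbers reproduce, as a sketch; the dictionary sizes are back-solved assumptions (roughly a 235k-word FreeBSD list and a 96k-word Debian Squeeze list), not values from the original comment:

```python
import math

# Complexity = natural log of the number of possible passwords,
# per Jonadab's formula above.
def words(dict_size, n):   # n words drawn from a dictionary
    return n * math.log(dict_size)

def chars(alphabet, n):    # n characters from an alphabet
    return n * math.log(alphabet)

print(f"3 words, 235k-word dict : {words(235_000, 3):.1f}")  # ~37.1
print(f"8 printable ASCII chars : {chars(95, 8):.1f}")       # ~36.4
print(f"4 words, 96k-word dict  : {words(96_000, 4):.1f}")   # ~45.9
print(f"8 mixed-case alnum      : {chars(62, 8):.1f}")       # ~33.0
```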
