Breaking Hard-Disk Encryption

The newly announced ElcomSoft Forensic Disk Decryptor can decrypt BitLocker, PGP, and TrueCrypt. And it’s only $300. How does it work?

Elcomsoft Forensic Disk Decryptor acquires the necessary decryption keys by analyzing memory dumps and/or hibernation files obtained from the target PC. You’ll thus need to get a memory dump from a running PC (locked or unlocked) with encrypted volumes mounted, via a standard forensic product or via a FireWire attack. Alternatively, decryption keys can also be derived from hibernation files if a target PC is turned off.
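
Elcomsoft doesn’t document its internals, but the standard way to find AES keys in a memory image (the approach behind research tools like aeskeyfind) is to slide a window across the dump and test whether 176 consecutive bytes form a valid AES-128 key schedule. A minimal Python sketch of that idea (illustrative only; real tools are far faster and tolerate bit errors in decayed RAM):

```python
def _build_sbox():
    """Compute the AES S-box from GF(2^8) inversion plus the affine transform."""
    def gmul(a, b):  # multiplication in GF(2^8) modulo x^8 + x^4 + x^3 + x + 1
        r = 0
        while b:
            if b & 1:
                r ^= a
            a <<= 1
            if a & 0x100:
                a ^= 0x11B
            b >>= 1
        return r

    def ginv(a):  # multiplicative inverse as a^254 (0 maps to 0)
        r = 1
        for _ in range(254):
            r = gmul(r, a)
        return r if a else 0

    sbox = []
    for x in range(256):
        i = ginv(x)
        b = i ^ 0x63
        for s in (1, 2, 3, 4):  # affine transform: XOR of four byte-rotations
            b ^= ((i << s) | (i >> (8 - s))) & 0xFF
        sbox.append(b)
    return sbox

SBOX = _build_sbox()
RCON = [0x01, 0x02, 0x04, 0x08, 0x10, 0x20, 0x40, 0x80, 0x1B, 0x36]

def aes128_key_schedule(key):
    """Expand a 16-byte key into the full 176-byte AES-128 round-key schedule."""
    w = [list(key[i:i + 4]) for i in range(0, 16, 4)]
    for i in range(4, 44):
        t = list(w[i - 1])
        if i % 4 == 0:
            t = [SBOX[b] for b in t[1:] + t[:1]]  # RotWord then SubWord
            t[0] ^= RCON[i // 4 - 1]
        w.append([a ^ b for a, b in zip(w[i - 4], t)])
    return bytes(b for word in w for b in word)

def find_aes128_keys(image):
    """Offsets where 176 bytes are internally consistent as a key schedule.
    A naive O(n) rescan per offset -- fine for a demo, too slow for real dumps."""
    return [off for off in range(len(image) - 175)
            if aes128_key_schedule(image[off:off + 16]) == image[off:off + 176]]
```

A key schedule is highly redundant (160 bytes are determined by the first 16), so a random false positive is essentially impossible; that redundancy is exactly what makes keys in RAM easy to spot.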

This isn’t new. I wrote about AccessData doing the same thing in 2007:

Even so, none of this might actually matter. AccessData sells another program, Forensic Toolkit, that, among other things, scans a hard drive for every printable character string. It looks in documents, in the Registry, in e-mail, in swap files, in deleted space on the hard drive … everywhere. And it creates a dictionary from that, and feeds it into PRTK.

And PRTK breaks more than 50 percent of passwords from this dictionary alone.
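
The string-harvesting step is essentially the Unix strings utility plus frequency ranking. A rough sketch of building such a candidate dictionary (my own illustration, not AccessData’s code; the `min_len` and `max_len` thresholds are arbitrary assumptions):

```python
import re
from collections import Counter

def harvest_candidates(blob, min_len=6, max_len=40):
    """Collect printable-ASCII runs from a raw disk or memory image and
    return them most-frequent-first, as password-guess candidates."""
    runs = re.findall(rb"[\x20-\x7e]{%d,%d}" % (min_len, max_len), blob)
    counts = Counter(r.decode("ascii") for r in runs)
    return [s for s, _ in counts.most_common()]
```

Feeding this list into a password cracker succeeds so often because people reuse strings that already exist somewhere on their own disk.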

It’s getting harder and harder to maintain good file security.

Posted on December 27, 2012 at 1:02 PM • 56 Comments


Ryan December 27, 2012 1:14 PM

Sooo… Maintain full-HD encryption at all times, or just don’t bother?

Also, does it scan every possible combination of unencrypted files as possible keyfiles? I highly doubt it.

Kurt Dillard December 27, 2012 1:41 PM

It seems like Elcomsoft glosses over the potential challenges of getting the hibernation files. If you’re using FDE then those files are encrypted, so if you always hibernate or shut down before walking away from your PC you should be safe from these tools, right?

The FireWire attacks (and other hardware-based exploits) are interesting; so is analyzing the DRAM for encryption keys; but all of these can be mitigated by keeping your PC with you whenever it’s turned on or sleeping, and waiting for it to fully power off when hibernating or powering down.

Also, I’m not sure how relevant the AccessData tool is, if you’re using FDE then their software won’t be able to find passwords stored on it when the computer is turned off.

It seems like in the long run encryption operations will have to be offloaded to tamper-resistant hardware, but that is expensive.

Moe December 27, 2012 2:19 PM

Move along, nothing to see here.

“If the PC being investigated is turned off, the encryption keys can be retrieved from the hibernation file. The encrypted volume must be mounted before the computer went to sleep. If the volume is dismounted before hibernation, the encryption keys may not be derived from the hibernation file.”

Curmudgeon December 27, 2012 2:26 PM

Dictionary attacks based on printable string searches could be defeated by using passphrases that contain mostly non-printing characters.

Kurt Dillard December 27, 2012 2:27 PM

Richard, BitLocker uses the TPM to store the keys and other information, such as measurements of the OS startup files, so their integrity can be checked before the TPM is unlocked.

Kurt Dillard December 27, 2012 2:32 PM

Good catch, Moe. It seems to me that they are confusing ‘sleep’ with ‘hibernation’ too. If the computer is in ‘hibernation’ mode it’s completely turned off, which means that the hibernation file stored on the encrypted system volume would be encrypted. Bruce or anyone else, feel free to correct me if my understanding of how these FDE products work is incorrect.

nycman December 27, 2012 2:34 PM

If you had a file encrypted with PGP, but had the file open when an attacker gets access to your computer, would we now say PGP is “broken”? FDE is no different. They are all vulnerable while the disk is unlocked. Which is why best practice is to disable sleep mode, set a timeout to hibernate, and use a strong pass-phrase. The hibernate partition should be encrypted too. For additional security, use FDE and a file encryption product. If FDE is compromised, the individual files will still be encrypted.

Is it possible that malware does a memory dump or extracts the FDE key, sending it to the attacker, while you are working on your computer? The attacker later retrieves your powered off laptop and already has the key. Would a TPM with strong password protect against this?

derp December 27, 2012 2:50 PM

They warn about sleep/hibernation mode right on the TrueCrypt website. If your machine is off there’s no way to retrieve anything with FDE, especially with DDR3 memory, which is immune to cold boot attacks.

Cris Perdue December 27, 2012 3:10 PM

So let’s assume you are using a full disk encryption product and your computer is hibernating. You wake it up from hibernation. Now at this point RAM has no decryption keys in it, indeed is basically blank. Does the disk encryption product ask for your passphrase during wakeup? Or is the hibernation file not encrypted? Or what? An inquiring mind would like to know.


Brad Cable December 27, 2012 3:35 PM

Curmudgeon: most (if not all) encryption software hashes your password immediately, yielding a hex-based result that is still easily found by searching for strings.

Kurt Dillard December 27, 2012 3:38 PM

Great question, Cris. I’ve been using BitLocker for years; I have it configured to prompt for the PIN to unlock the TPM when resuming from hibernate. However, you are not prompted for a PIN when waking from sleep mode. I used to use TrueCrypt but don’t recall how it behaves.

Some links from Microsoft on these issues (note the dates too):
– Defense-in-Depth vs. BitUnlocker: How to defeat Cold DRAM attacks using BitLocker, Power Options, and Physical Security
– Protecting BitLocker from Cold Attacks (and other threats)
– Blocking the SBP-2 driver and Thunderbolt controllers to reduce 1394 DMA and Thunderbolt DMA threats to BitLocker

Yuri December 27, 2012 3:57 PM

Even if an attacker has access to a running computer with mounted volumes, he would still need to break into the system to acquire a memory dump. But if he can break into the system, why would he need the keys when he could just double-click to access the data?
In other words: for their product to work, you need a memory dump, but if you have the means to get a memory dump, you don’t need their product anyway 🙂

Dave December 27, 2012 4:04 PM

Does anyone know if Firewire attacks will be achievable on machines with Firewire converters – say, via Thunderbolt? Wondering if that DMA vector may be disappearing soonish…

gangnam style December 27, 2012 6:33 PM

You can set TC to automatically dismount whenever your comp goes into hibernation mode, though if the entire disk is encrypted, there’s no hibernation file to find.

Elcomsoft is selling regular old memory siphoning software that will suck the memory contents out of a running system to get any keys stored there. There will be nothing there if:

-all containers dismounted
-you set your comp to automatically dismount when entering sleep or hibernation mode
-the comp is powered off
-you aren’t using an encryption program that keeps keys in memory

DDR3 memory does not hold any charge after being powered off, which has made cold boot attacks a thing of the past, at least if your worry is an immediate shutdown followed seconds later by a seized system and memory analysis.

If you’re paranoid and worried about FireWire access you can always write a script so that if somebody inserts a FireWire device your machine will either power off or start zeroing out memory and the drive. Meddle around with the Linux/BSD automount settings.

Nick P December 27, 2012 9:32 PM

Nice that there’s something about forensics: my recent paper grab has some nice innovations in that area.

“Bulletproof solution to forensic memory acquisition”

TRESOR-Hunt: Attacking CPU Bound Encryption

I remember Clive liked the TRESOR scheme. I knew it would fall eventually. DMA is a pandora’s box for security issues, it seems.

I honestly have to give it to Clive for coming up with the best idea for COTS HD crypto: combine a self-encrypting drive, an inline-media encryptor, and full-disk encryption. They have to break it all. The problem: the NSA is pretty much the only one making an IME, so you must custom-make that one. SED + FDE is still an improvement that shouldn’t hurt performance, as SEDs have hardware acceleration.

jordan December 28, 2012 2:28 AM

Is this just for Windows hibernate, or is Mac sleep also affected? How do you automatically remove the hibernation file every time the computer boots and wipe out free space?

A Nonny Bunny December 28, 2012 2:34 AM

I disabled the pagefile and hibernation file on my computer, because I didn’t really see the point of sacrificing a third of my solid state disk. If you have enough memory and your computer boots in 10 seconds, what’s their use?

Andy December 28, 2012 9:54 AM

Guys, you don’t get it, right?

Hibernate: Some people don’t encrypt the full disk. If you don’t encrypt the partition where the hibernation file sits, they can get access to it.

If you have everything encrypted, then hibernation is safe.

Sleep or running machines: A cold boot attack WILL work; even with DDR3 they work. And then you can make a memory dump. Done.

Only with ECC RAM would you be safer. But somehow there is still a lack of ECC in home-grade systems.

Brian December 28, 2012 10:55 AM

This attack, which I agree is an old one, seems like a pretty basic failing of computer hardware security that needs to be fixed before ANY software security matters much. Physically plugging a device into a machine shouldn’t give you complete access to the system. Is this the Star Wars R2-D2 security model or what?

However, I had heard that at least with OSX, the firewire memory dump may not work any more. If the other operating systems haven’t followed suit, it seems like a good time to do so.

yerpa December 28, 2012 2:17 PM

According to the MIT lecture on signals and systems I just saw, cold boot attacks are extinct ever since DDR3 RAM: it does not hold a charge and is immediately cleared when powered down.

Clive Robinson December 29, 2012 3:59 PM


For all those talking about DDR3 memory losing everything when you turn the power off, I would be cautious about making that statement.

The reason is that there are several low-level ways data is “residually retained” in memory that is not powered up.

This has been an industry-known issue, as far as I’m aware, since several years before the first IBM PC hit the streets (i.e. back in the 1970’s).

Whilst modern designs of various types have reduced the effects, it is still an issue if other techniques have not been employed.

As noted by @Nick P above, I have preferred options that make things a little more difficult for attackers, but I would be daft to call them secure. This is partly because the technology is not being used correctly and partly because of the weakest links in the security chain: “user convenience” and “efficiency”.

For instance, Keying Material (KeyMat) is going to be vulnerable if it’s entered from a single point (i.e. the computer keyboard) because, amongst other things, the OS does its best to store the data in half a dozen buffers that often get written to disk, in part or in full, depending on what the OS is doing and why.

The NSA Inline Media Encryptor (IME) gets around this single channel issue by having a KeyMat only channel via the crypto ignition keys.

A few external HDs with FDE use a keypad on the device to let you enter a PIN that then selects or builds the KeyMat, which is stored in the drive electronics. [Whilst this obviates the keyboard route, all the FDE external HD systems I’ve seen to date are, security-wise, very weak, for “user convenience”.]

Likewise, software-based encryption has issues to do with human failings in the weakness of passwords and passphrases. They, like the keyboard buffer, end up being stored in system memory in some form that an OS could, for one reason or another, put onto the HD etc.

Thus I’m a firm believer in separate channels for KeyMat at each level (PC, IME, external FDE HD). I just think the current KeyMat channel solutions are not up to the job, for various reasons that are not technical. Thus it is the suppliers that are letting us down for one reason or another…

Johnston December 30, 2012 4:33 PM

“Elcomsoft Forensic Disk Decryptor acquires the necessary decryption keys by analyzing memory dumps and/or hibernation files obtained from the target PC.”

OpenBSD’s malloc fills junk bytes into allocated and freed chunks via the J option. By default OpenBSD encrypts the swap (where hibernation state is kept), and has done so for many years.

Nick P December 30, 2012 5:41 PM

@ Johnston

Once again, OpenBSD solves a problem many months to years before anyone else notices it’s there. They don’t bother looking for major press. They just find problems, fix them, and move on. Another example, if I recall, was them fixing a BIND problem almost a year before it was “discovered” by security researchers.

PreBootMe December 30, 2012 7:49 PM

My understanding is that this FDE vulnerability is due to lack of pre-boot authentication (PBA).

Without PBA, the FDE encryption key is automatically decrypted (with no user authentication) and stored in RAM when the computer boots. So, without PBA, if someone steals your computer they can boot it (i.e. from full shutdown) to the Windows OS login, essentially bypassing the FDE.

With PBA, when the computer boots, the user is prompted to authenticate before the FDE encryption key is decrypted into memory.

This is why most (all?) FDE encryption products default to using some form of PBA (e.g. Microsoft recommends using a PBA PIN with Bitlocker for laptop computers).

One question I have is whether the Elcomsoft decryptor works on the GuardianEdge (aka Symantec) FDE product. Does the GuardianEdge FDE product use “trickier” memory obfuscation for the in-memory decrypted FDE encryption key?

Perplexed December 31, 2012 6:23 PM


As others have pointed-out:

Isn’t FDE intended to protect data while it is at rest?

Doesn’t the attack in question require either:
-access to the encrypted volume while it is mounted or,
-access to an UNencrypted hibernation file?

As far as the latter is concerned, wouldn’t either:
a) making sure the hibernation file is encrypted (which, presumably, any FDE scheme worth its salt would do by default) or,
b) not using hibernation at all,
render such an attack ineffective?

As far as the former is concerned, if an attacker has successfully managed to gain access to a mounted encrypted volume, isn’t it already game over in nearly all cases?

So what is the big deal here?

What am I missing?

Missing nah December 31, 2012 7:32 PM

You are not missing anything. It’s a non-issue for most. Just don’t use sleep.

I think the bigger issue, at least for individuals, is where to store your emergency backup access keys. You can’t just have them in a drawer next to the computer. But if your computer is seized by someone willing to do so, couldn’t they take your safe, or subpoena your deposit box? Yes, it seems we are missing the big issue here…

Clive Robinson January 1, 2013 3:28 PM

@ Perplexed,

“Doesn’t the attack in question require either: access to the encrypted volume while it is mounted or, access to an UNencrypted hibernation file?”

Not in all cases, the attack is actually about illicitly obtaining / recovering crypto Keying Material (KeyMat) to access the encrypted data on secondary storage.

So if you think about it, all this attack requires is for the Keying Material (KeyMat) to be in accessible system memory (RAM, registers, HD electronics, or I/O or other peripheral devices) in some form usable to the attacker.

If you consider what happens when a COTS PC/server computer system boots up, it has neither an OS nor other executable code in its RAM, nor KeyMat for encrypted secondary storage.

What usually happens on a COTS system when a hardware reset (power up etc) occurs is,

1) A ROM chip is switched in to the system memory map to run from the “reset vector”.

2) Generally the ROM has a minimal loader that copies the image of some kind of Basic Input Output System (BIOS) from the ROM into RAM at some other memory location and then jumps to execute the image in RAM. Almost the first thing the BIOS image does is to switch the ROM out of the system memory map altogether.

3) The BIOS (image in RAM) then performs some Power-On Self-Test (POST) diagnostics and basic hardware setup, including copying any “expansion card” ROM into memory, and then looks around for “bootable media”. If no bootable media is found, early PCs like the Apple ][ or the original IBM PC dropped into the ROM BASIC interactive interpreter, whereas more modern PCs display an error message and halt, whilst some server systems drop into a basic shell on the designated console that allows an operator to select a boot device or do other activities.

4) If a bootable device is found (or selected) the computer system will use a bootstrap loader to either load an OS or another bootstrap loader into RAM and start executing it….

But if the bootable device is fully encrypted the system can’t do this last step (4), so it needs to get the KeyMat from somewhere first…

There are three basic ways it can get the KeyMat,

1, Ask the user/operator to type the KeyMat in on the console.
2, Read the KeyMat from the bootable media/device.
3, Read the KeyMat from a “fill device”.

The first requires considerable skill from the user, the second is obviously insecure, and the third requires an additional secure storage device.

In Mil/Gov/Dip systems only option three is generally considered, and for NSA devices such as the Inline Media Encryptor (IME) they use one or more KeyMat fill devices called “Crypto Ignition Keys/Devices”.

In commercial/consumer-grade systems they have in the past usually used options one or two, or more recently a combination of both, such that the user/operator types in a PIN/password/passphrase which then unlocks the CPU-bus-connected HD (or fill device) and the KeyMat gets copied from it into system RAM.

From this point onwards the KeyMat is vulnerable to this or similar attacks.

Now as we all know, “users love simplicity and ease of use” and they don’t want to be typing in multiple AES keys etc. Thus many modern x86 PC motherboards come with a secure, tamper-evident KeyMat storage device, which is the MS/Intel Trusted Platform Module (TPM). Other PCs and servers use a crypto smart card or even a specialised RS232 or USB device (some even pull it across a secure network connection using a modified form of NetROM).

Now the TPM has the KeyMat in it, and how it’s configured defines the way KeyMat gets into system RAM at boot-up.

In many cases TPMs or other crypto fill devices can be set up to just load the KeyMat automatically on boot-up with either a minimal (weak) PIN/password/passphrase or no protection at all, on the (incorrect) assumption that an attacker cannot get it out of RAM without having hardwired physical access…

So just turning a computer on can in many cases cause the KeyMat to get pulled into memory, even though it might not let you proceed to fully boot into the OS without entering a user name and password etc…

Thus it’s quite possible to have a computer system with high-entropy KeyMat stored in a secure module inside it that is used for full encryption of the data on the HD, but that in reality is only protected by a four-digit PIN or a short, guessable username and password/passphrase. That can usually be “guessed” in some way with access to the hardware (if I remember correctly, Apple phones and other smartphones suffer from this problem).

Worse, this weak “user input” can usually be collected with a suitably small device such as a keyboard logger or a miniature CCTV camera hidden in a ceiling fixture etc., or by just picking the user’s pocket, or reading the sticky note attached to the screen/keyboard where the user has helpfully written it down as an aide-mémoire.

It’s problems such as these that make the use of FDE and other “data at rest” security systems a bit pointless for the majority of users. In some respects the whole FDE marketplace is not actually driven by “real security concerns” but by “checkbox audit concerns”.

Wael January 2, 2013 11:45 AM

@ Clive Robinson

It sometimes helps to understand what threat the TPM in your description protects against. The two main threats are:

1- Removal of the hard drive from a server and attempting to decrypt the volume offline (off-site)

2- Booting the server or PC from secondary media (CD-ROM, a Linux drive, etc.) and then attempting to decrypt the secondary volume.

As far as I know, the TPM does a good job protecting against those two attacks.

Keep in mind, however, that the main purpose of a TPM is not for encrypting a volume — this was an ancillary first commercial use of the TPM (Trusted Platform Module). Refer to our previous discussion about what “Trusted boot” means as opposed to “Secure boot”, or for that matter, the definition of “Security”…

The TPM does not simply store KeyMat, it seals it to a platform state. Without that state (which will surely be the case when you mount the stolen drive on a different server with a different TPM), the KeyMat will not be available. And it’s decrypted within the TPM. There are various configurations and architectures for using a TPM to assist in full-volume encryption. Some implementations are strong, and some are, well… a little weaker…

There are also implemented mechanisms in commercial products to protect against what is known as “reset attack”, where RAM is scrubbed before shutdown, or on power up.

Clive Robinson January 2, 2013 4:36 PM

@ Wael,

It sometimes helps to understand what threat the TPM in your description protects against

Yes it does, and the two cases you mention are by and large not related to stealing the keys from memory. I mentioned it because in a significant number of cases the TPM is configured badly, so it puts the keys in memory with insufficient safeguards…

That is, it does not matter how good or bad the basic strength of your safe is if the user leaves the keys or combination within easy reach of it.

A big problem that is beginning to bubble up with security is the issue of “Real -v- Audit” as its driver.

If the driver is for “real security” then the processes as well as the technology are covered effectivly by policy.

However, if the driver is “audit security” then the technology will match the auditors’ checkboxes but the processes will almost invariably only be there for “lip service” reasons.

Whilst the increased level of security means a larger market and (hopefully) reduced prices for the technology, the “audit” driver means that the technology will almost certainly be lacking in one or more ways. An example of this is a supposedly secure external HD that used 256-bit AES to encrypt the data; however, the AES key was stored internally and only protected by a short PIN. Thus you have AES KeyMat with ~256 bits of entropy being protected by a 4-digit PIN giving only 13-14 bits of entropy…
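
Clive’s figure is easy to check: a 4-digit PIN carries log2(10^4) ≈ 13.3 bits of entropy, against the ~256 bits of the AES key it guards. A quick back-of-the-envelope calculation (the 10-guesses-per-second rate is an arbitrary assumption for illustration):

```python
import math

pin_bits = math.log2(10 ** 4)        # 4-digit PIN: ~13.3 bits of entropy
aes_bits = 256                       # the key the PIN nominally "protects"

# Exhausting every PIN at a modest 10 guesses per second:
worst_case_seconds = 10 ** 4 / 10    # 1000 s, i.e. under 17 minutes
```

In other words, unless the device rate-limits or locks out after bad guesses, the effective strength of the whole system is the PIN, not the AES key.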

Wael January 2, 2013 5:24 PM

@ Clive Robinson,

I hear you…
You are describing a case, then, where an adversary has physical access to the device and the liberty to use hardware equipment to dump memory and steal a key. It would be easier if they just got to the information they want while they’re at it. They still need to know what the key is for, since the TPM has a hierarchy of keys, not a single key.

Almost all bets are off when an adversary has physical access to the device with this level of attack. By the way, the TPMs I last looked at were EAL 4+ (Nick P would not like anything below 7, I guess)…

Speaking of drivers, it’s mainly a check in the box…
‘nough said about that 🙂

Wael January 2, 2013 5:47 PM

I want to comment a bit on this:

An example of this is a supposadly secure external HD that used 256bit AES to encrypt the data, however the AES key was stored internaly and only protected by a short length PIN. Thus you have AES KeyMat with ~256bits of entropy being protected with a 4 digit PIN giving only 13-14bits of entropy…

If the external drive is protected by a TPM, the PIN or pass-phrase alone is not sufficient to break it, i.e., the entropy of the 256-bit AES key has not been reduced to the length of the PIN or pass-phrase. Here is why:

1- The encrypting / decrypting AES key is itself wrapped by another key internal to the TPM, which is wrapped by another key, and another, all the way to the SRK.

2- The TPM will NOT decrypt a key unless:
a- the PCRs contain the expected values the wrapping key was sealed to, and
b- the correct pass-phrase was entered.

So you really need to know the pass-phrase and have access to a clone of the computer the external drive was attached to at the time of encryption. That clone includes a clone of the TPM on the original device as well (with its unique SRK — Storage Root Key, which is generated when the TPM is taken ownership of); it includes all the option ROMs that were measured and extended into the TPM PCRs (Platform Configuration Registers) — an identical clone, as far as what was measured in those PCRs.
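
The seal-to-platform-state behaviour Wael describes can be modelled in a few lines. This is a toy sketch of the concept only (my own construction, not TPM 1.2 wire-format code; real TPMs wrap keys under an RSA key hierarchy rather than this XOR-stream stand-in), showing PCR extend and the rule that the secret is released only when both the PCR state and the pass-phrase match:

```python
import hashlib
import hmac

def pcr_extend(pcr, measurement):
    """TPM-style extend: new_PCR = H(old_PCR || H(measurement))."""
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

def seal(secret, pcr, passphrase):
    """Bind a secret (<= 32 bytes in this toy) to a PCR state and pass-phrase."""
    kek = hashlib.pbkdf2_hmac("sha256", passphrase, pcr, 100_000)
    stream = hashlib.sha256(kek + b"enc").digest()
    ct = bytes(a ^ b for a, b in zip(secret, stream))
    tag = hmac.new(kek, pcr + ct, hashlib.sha256).digest()
    return pcr, ct, tag

def unseal(blob, current_pcr, passphrase):
    sealed_pcr, ct, tag = blob
    if current_pcr != sealed_pcr:
        return None  # platform state changed: refuse, as a real TPM would
    kek = hashlib.pbkdf2_hmac("sha256", passphrase, current_pcr, 100_000)
    expect = hmac.new(kek, current_pcr + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expect):
        return None  # wrong pass-phrase
    stream = hashlib.sha256(kek + b"enc").digest()
    return bytes(a ^ b for a, b in zip(ct, stream))
```

Move the drive to a machine whose boot measurements differ and `pcr_extend` yields a different state, so `unseal` refuses; that is the sense in which the key’s entropy is not reduced to the pass-phrase alone.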

Nick P January 3, 2013 1:39 PM

@ Wael

Good points on TPM. Yes, with regular hackers or thieves, the TPM should be effective at ensuring a trusted state and preventing keys from leaking. I prefer something with a bit more flexibility in what I can do with it, but TPMs can do this basic thing, if implemented correctly.

@ Clive Robinson

“By the way, The TPM’s I last looked at were EAL 4+ (Nick P would not like anything below 7, I guess)…”

Previously, maybe. No longer, now that I’ve seen so much of the history of high assurance and understand its economics. I think a high-assurance TPM is quite doable, more than before. I’ll settle for an EAL5-6 TPM that makes the right choices in how it approaches hardware security, software assurance, and security features.

The problem I’ve been having with Common Criteria is EALs vs Protection Profile attributes vs Security Targets. The government prefers using Protection Profiles, and I think they decided to only use them for high assurance evaluations. The other thing is that, within a Protection Profile, the attributes themselves are implemented at different levels of assurance. For instance, the security kernel mechanism itself might be EAL7, but the drivers or auditing code certainly weren’t.

So, I’d divide it up as follows. First, the EAL should be EAL5 at a minimum. We have several smartcards and their associated middleware evaluated at EAL5+ with plenty of useful features. The VAMP and AAMP7G processors were formally verified to tremendous levels. We also have three smartcard platforms with EAL7-like development assurance: MULTOS, Gemalto’s JavaCard, and IBM’s Caernarvon. So, there’s no technical excuse for these TPMs to be EAL4.

So, I’d use commercial best practices for the basic hardware. We can assume it will be beaten by undergrads like most other platforms. Maybe make an optional port to IBM’s FIPS Lvl 4 coprocessor. Anyway, for hardware security, we just need basic tamper-resistance and effort put into it provably resisting software level attacks (i.e. no security critical errata).

Now, from here, we can implement the security-critical parts in hardware or design a new TPM around an EAL5-certified cryptoprocessor (my preference). If it’s hardware, we use existing hardware verification technology for initial design, refinements and testing. We can already achieve good results with that. Galois’s Cryptol supports generating VHDL code from crypto algorithm specs. Tools like Esterel’s SCADE or similar event-driven architecture might be used for their certified code generators.

If it’s the other choice, the TPM would be software: what runs on cryptoprocessor and what interfaces to it (mostly drivers). We have tech now to formally specify and prove assembler, interrupts, concurrency, and drivers. We need to use it to some acceptable degree. NICTA was also working on auto-generating drivers from hardware specs, but idk their progress.

(Note: We don’t necessarily need a cryptoprocessor if we’re not worried about high-tech physical attacks. We can use a robust SOC like Freescale offers with onboard crypto mechanisms such as TRNG. That makes things MUCH easier.)

A side advantage of using a software-centric TPM design is that it can be field-updated. I’ve detailed methods before, here and elsewhere, on how to build a high-assurance boot and update mechanism. There is also commercial I.P. available for that from one vendor, but idk its actual security.

Another advantage is flexibility. The same basic hardware can be reused for other security functionality. An example would be for a recovery-oriented architecture where the untrusted system keeps telling the trusted coprocessor which memory locations are immutable or about dataflows. The coprocessor can periodically check to ensure operations are going appropriately. The TPM and these other products likely share the need for trusted boot, storage, crypto primitives, RNG, etc. A common platform keeps costs down.

For assurance, I’m fine with gradually increasing assurance. That concept existed as far back as MLS LAN. So, right now, we use minimal verification technology to catch low hanging fruit. We apply existing, high-quality commercial IP to produce an interim product. (Or we just keep existing TPM’s that are good enough.)

Then, we start a new bottom up design using high assurance principles. Make it modular so we can assure components individually and their interactions. Gradually improve the platform. Every now and then, a release can be made to generate revenue to continue support. This is how Karger recommended doing EAL7-type projects and worked for the Caernarvon project. Whatever they build, it should be at least EAL5+ overall and crypto/rng/interface should strive for EAL7. Mixed assurance is fine with me so long as we get the right mix.

Wael January 3, 2013 6:09 PM

@ Nick P

I think a high assurance TPM is quite doable, more than before…

I think so too. The problem is not technical per se, I think it’s economic. TPMs were designed to be cheap devices (sub-$1, often a lot less). I have not looked at TPMs in over 5 years, so my information may be dated. Computer manufacturers count every penny these days; an increase of, say, 5 cents in the price of a TPM is unacceptable! There are other factors too. Will leave that to your imagination 🙂

Nick P January 3, 2013 10:58 PM

@ Wael

Wow, I didn’t know they were under $1. That’s priced more like some of these cheap microcontrollers. This explains the lower flexibility, features, performance and/or assurance.

Alright, so maybe the average vendor won’t buy it. Maybe they might go for it in a solution they can mark up on a premium. The Dell “Secure Consolidated Client” for MLS is an example: it’s a Dell computer with INTEGRITY-178B and some custom software. I know Green Hills isn’t exactly cheap. They pay the extra cost b/c they can charge more for the system.

So, the trick seems to be finding a way to market systems that have an extra level of assurance. It might also help to develop a bit of an ecosystem of products around the TCB. Examples might be monitors, digital signature systems, authentication, etc. Not sure how to do that, so I guess this stuff remains in limited markets or on custom stuff I come up with. 😉

Note: Chips like Freescale iMX have many nice security features and plenty of speed, yet cost around $5 for at least 100. I keep thinking about using them. I prefer PowerPC, but there’s plenty of ARM software out there due to smartphone wars.

Privacy =/= Impropriety January 4, 2013 12:27 AM

As somewhat of an aside, it would seem to me that there could be benefit in encrypting as much of one’s data as possible, no matter how non-sensitive and inconsequential.

Some reasons:

- Encrypting only sensitive data flags it as such. (This would also seem to be a valid argument for using Tor and even SSL as much as possible, even for one's most mundane and neutral traffic.)

- If one's encrypted data differs significantly from one's unencrypted data, detecting such discrepancies may be possible and may at least make attacks on the encryption easier.

Nick P January 4, 2013 1:17 PM

@ privacy/impropriety

That is why cryptogeeks have always argued for pervasive use of crypto.
The problem you highlight was a reason RubberhoseFS and its ilk failed: the only people using them definitely had something to hide.

The OpenBSD and anonymity communities are the only groups I recall that successfully pulled off the pervasive-crypto concept.

RobertT January 4, 2013 11:49 PM

FDE is easy; it's KeyMat handling that's the hard problem. Average consumers have the weirdest mutually exclusive concepts regarding data security. On the one hand they want security that is LEO-unbreakable, but at the same time they want some method by which HD manufacturers and even local PC repair shops can recover the data after they lose the key. So consumers only really want to feel good; they read lots of FUD that they can barely understand and they want to do something. Sorry, but that's the market for FDE: the do-something crowd.

The TPM product that meets the do-something requirements is absolutely useless for serious EAL-whatever applications. Now a do-something TPM can easily be a sub-$1 product, BUT a serious EAL5/6 TPM will need to be a $10-or-more product because it will never ship in million-unit quantities. As the volume shrinks the price skyrockets, so by the time you address serious EAL7 you are lucky if the identifiable volume is 1K units.

BTW, it will take you over a year to obtain EAL5 certification, so even if the do-something and EAL5 versions are the same basic part there will be a significant development-cost difference for the EAL5 part.

Clive Robinson January 5, 2013 2:26 AM

@ Random User,

The EFF, among many others, clearly do not agree with the conclusion of Clive Robertson

First off, when you quote me please do so accurately and in context, otherwise it leaves you open to various accusations. Oh, and getting my name wrong does not help your case either.

What I actually said was,

It’s problems such as these that makes the use of FDE and other “data at rest” security systems a bit pointless for the majority of users. In some respects the whole FDE marketplace is not actually driven by “real security concerns” but “checkbox audit concerns”

So, as can be seen, before you modified what I had said and put it out of context for the sake of your argument, I had made my comment not as the open-ended statement you deliberately made it, but bounded by what I had stated above it in my reply to "Perplexed".

As can be seen with my opening reply to Perplexed’s question,

So if you think about it, all this attack requires is for the Keying Material (KeyMat) to be in accessible system memory (RAM, registers, HD electronics, or I/O or other peripheral devices) in some form usable to the attacker.

I was limiting my context to this new attack.

The EFF posting you link to was from back in 2011, and the material they cite as its foundation is thus even older.

As Bruce has noted a number of times in the past, attacks only get better with time.

But you have further not compared what I have said with what the EFF have said.

My comment was in two basic parts: the first described how most computer systems (PC/server) get to the point of using a fully encrypted device; the second, how systems fail because,

… as we all know “users love simplicity and ease of use” and they don’t want to be typing in multiple AES keys etc.

As Bruce has also noted in the past, users regard security as getting in the way of the work they are paid to do, by which their performance — and thus employment, potential bonuses and promotion — are judged by their managers (not the IS staff). Thus it is in a user's self-interest to set their priorities accordingly and minimise the impact or inconvenience of any security measure.

Thus, if the system allows it, short, easily broken passphrases, passwords and PINs are to be expected as the norm rather than the exception.

I then went on to describe various ways that system manufacturers had designed systems to "help the user", and how, due to poor configuration and use, the high entropy of the AES KeyMat could be reduced to just the few bits of a four-digit PIN.

As Bruce has also pointed out in the past, security is a chain, the strength of which fails at its weakest link. Which would in this case be the passphrase, password or four-digit PIN.

As you may or may not know, the benchmark for crypto attacks is a "Brute Force Attack", which is basically trying each possible key in turn until you find the right one. Trying 2^128 guesses at an AES key is currently thought to be impossible within the expected remaining lifetime of the Universe. However, going through the roughly 2^13 combinations of a four-digit PIN can be done in just a few seconds if you can get appropriate access to points on the computer system motherboard [1]. If you do, that will release the AES KeyMat into system memory for the attacker to harvest, at which point it's "Game Over" for the prevention of access to the "data at rest" under FDE.
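
To put numbers on that asymmetry, here is a short Python sketch; the key-derivation function and the PIN value are purely hypothetical stand-ins for whatever the real system does:

```python
import hashlib
import math

def derive_key(pin):
    """Toy stand-in for whatever KDF maps the user's PIN to the AES KeyMat."""
    return hashlib.sha256(pin.encode()).digest()

def brute_force_pin(target_key):
    """Walk the entire four-digit PIN space: 10^4, i.e. ~2^13.3 candidates."""
    for n in range(10_000):
        pin = "%04d" % n
        if derive_key(pin) == target_key:
            return pin
    return None

# The effective keyspace is the PIN's, not the 2^128 of the AES key it unlocks.
assert math.log2(10_000) < 14
recovered = brute_force_pin(derive_key("4871"))   # completes almost instantly
```

The point is that the 128-bit key never has to be attacked at all; the chain fails at the four-digit link.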

Now if you go back to the EFF article you will see they very specifically go over the generation and use of strong passphrases. It has its own highlighted section that begins with,

Full-disk encryption is most effective if you make a strong passphrase…

So rather than the EFF being in disagreement with what I've said, they were in fact stressing the importance of not making the mistake I was highlighting as a major "real world issue" with current FDE system usage.

So you could further say the EFF were being upbeat about FDE due to their assumption users would follow all their advice and do the right thing, and I was being pessimistic about users not doing all that was required.

I would say in my defence that I think the EFF were being overly optimistic about users, and due to the practical experience of myself and others including Bruce, I am being realistic about what many users will do in practice.

In many ways security from a user's perspective is much like many New Year resolutions, such as "I will get fit": they spend good money on equipment to help them achieve the goal but then don't use it properly, so get no real benefit from it…

So just to make it clear for you,

The weakest link in a security system defines its strength, and the point at which an attacker will attack, if they are aware of it and can exploit it. Due to various human failings, humans will modify systems to make their lives as simple as possible so that they can get on with their work. Such modifications are almost always to the detriment of system security.

And finally, the only real difference between the time when the EFF wrote their piece and the time I wrote the above is that there is now an acknowledged method by which an attack can be fairly easily mounted against an FDE-protected system such as a laptop, by governments and LEAs against individuals, because ElcomSoft has for commercial reasons decided to release a cheap tool for them to do so.

[1] This is a currently open field of research, but put simply, if the TPM on your system motherboard is a separate chip from the one doing the FDE, then the KeyMat has to travel across the wire in some way. Whilst it's possible to encrypt it in some way, many systems to date have failed to do so sufficiently well to prevent being broken, which has happened several times. Further, the TPM is also susceptible to software injection attacks [2] even if it is not a separate chip.

[2] Many TPMs are vulnerable to software attacks because of the way they check code that is to be executed by the system CPU. In essence it's a form of code signing, where you hash the memory image as it's loaded and check the hash against a signed version. As this is computationally expensive it's often only done on loading and not during run time. If an attacker can change the code on such a system after it's checked, neither the TPM nor the CPU will know the code is no longer trusted, and it's game over. There are a number of ways this can be done; the two primary ways are via DMA [3] and interrupt handlers [4].

[3] On nearly all system motherboards there are one or more Direct Memory Access (DMA) controllers used to quickly move images of memory around system memory without causing the CPU to slow or halt its execution. In essence they are like a very limited function co-processor whose function is to copy information from one location in main system memory to another. In some systems DMA is available via an external interface and is used by high-speed, high-capacity memory devices. The problem with this is that such interfaces are usually poorly protected, and an attacker with a quite simple device can plug it in and change the information stored at any location in memory. Thus it's fairly simple to read out the system memory image, analyse it on another system, and then write back what is in effect malware that both the TPM and CPU assume is trusted code. This attack is in effect at a level below the CPU in the computing stack, and as such is one of many in the vast gulf between hardware design and software that, as I've indicated a number of times in the past on this blog, is not currently protected in any way.

[4] DMA is not the only way an attacker can get at system memory to change signed code that has been checked by the TPM / CPU. Most CPUs have facilities for dealing with real-time events that occur asynchronously to the code executing on the CPU. In many cases these events need to be serviced so rapidly that dealing with them in a round-robin way with software polling is not viable. To do this the hardware raises an exception and causes the CPU to switch from ordinary processing of data to an exception handler. Because of the nature of these exceptions they are given their own name of "interrupts"; whilst originally designed for hardware, they have long since been used for software as well and now form the bedrock of communicating data from user execution space to kernel-protected execution space. And thereby they open up an attack vector via the interrupt handler. Originally, computer system suppliers designed all of the I/O devices and wrote the interrupt handlers "in house". However, that all changed back in the 1970s when other people started designing add-on hardware and usually wrote the device drivers themselves to work with an API developed by the OS kernel designers. As we know, it's possible to put in software hooks that will pass a code review process by the OS designer's organisation and thus be signed as trusted code. But like all useful tools these software hooks can be used for other nefarious activities, such as changing memory that is assumed to be trusted.
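
The measure-once weakness described in [2] is a classic time-of-check/time-of-use gap. A minimal Python model (class and names hypothetical) of why a load-time-only hash check misses later DMA or interrupt-handler changes:

```python
import hashlib

class MeasuredLoader:
    """Models TPM-style measurement: hash the image once, at load time only."""
    def __init__(self, trusted_hash):
        self.trusted_hash = trusted_hash

    def check(self, image):
        return hashlib.sha256(bytes(image)).hexdigest() == self.trusted_hash

code = bytearray(b"benign boot code")
loader = MeasuredLoader(hashlib.sha256(bytes(code)).hexdigest())

assert loader.check(code)     # passes the one-and-only check at load time
code[:6] = b"malice"          # post-check change via DMA or an interrupt hook
# Nothing re-measures the image at run time, so the tampering goes unnoticed;
# a re-check at this point would fail:
assert not loader.check(code)
```

The mitigation direction implied here is continuous or on-use re-measurement, which is exactly what the "computationally expensive" remark explains real systems tend to skip.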

Clive Robinson January 5, 2013 10:00 AM

@ Nick P,

I remember Clive liked the TRESOR scheme. knew it would fall eventually. DMA is a pandora’s box for security issues, it seems

I was not ignoring you, but waiting for the thread to die down a bit before replying, as my posts generally have some length to them (as this one will 😉

If you view the security stack in computing, at the bottom of what you might call the physical layer are quantum effects, working up through basic device physics, transistors, logic gates, etc. At the top of the traditional ISO OSI seven-layer model you have the application layer, but as we know from "layer 8/9/10" conversations it goes all the way up into politics.

If you view this stack you discover some quite unpleasant truths, for instance I remember your conversations with @RobertT over the lack of security in the chip design process and your surprise at just how little security there was.

As for the political process, well, I guess most of us will never really understand what goes on in there. The machinations of the various performing-arts rights holders are enough of an indication that "Money talks, and civil rights walk."

But what many people don't realise is the dependence relationships of security: that is, application-layer security is based not just on the layers beneath but on the policy layers above.

That is, you can undermine computer security from beneath, as expected, but also from above via protocols and standards, and upwards to politics and legislation like CALEA.

The thing is, many people don't realise there is a large gulf of no security below what they consider the physical layer: that is, above the basic logic-gate level but below the CPU level they see in abstract terms through software.

If you think about the hierarchy of an actual computer system, we have the CPU as the boundary for software. Thus this is where most code-cutting software developers consider the "be all and end all" of their mental map. They don't consider that there are layers beneath, such as the memory and I/O, that can change behind their backs and thus affect what they do, making their mental security models entirely invalid.

DMA is just one such way of manipulating the memory beneath the CPU level; there are also the Memory Management Unit and the I/O mechanisms that rule the roost via the interrupt mechanisms, running at what you might consider Ring -1. And these are what you might consider the upper layers of this gulf of insecurity. There are other tricks, such as the actual CPU microcode or its equivalent in all the other state machines.

Normally we don't consider them in the security model, either currently or in times past. The obvious question thus arising is: why?

Well it is because of the idea of “Physical Access Control” preventing “front panel access”. That is the computer room was an “outer perimeter” that could be much more easily defended by the physical means of bricks, mortar, doors, locks and human security such as guards and 24×7 resident operators than it could by using the hardware and software of the system.

Sadly, whilst back in the 1960s and 70s they were well aware of these below-the-CPU problems, they did not have the spare computing hardware resources to do anything about it. Thus the prevailing view was you had no choice but to do it by physical access control. And with the very resource-limited big-iron systems of the time running in "time share" not "resource share", this was not really much of an issue: you just changed a tape or disk pack and reloaded the memory.

But the prevailing view became a mindset and few remembered why; it just became "this is the way we do things". Hence, as the switch from "time share" to "resource share" happened, the development of security still remained firmly set in "big iron" "physical access control", and the outside-the-perimeter terminals were dealt with by development along the lines of "trusted path".

Then back in the 70s and 80s the physical access control security model broke down; the first personal computer resources started what eventually resulted in the demise of big iron and terminals. But the security model and ideas appeared stuck in the big-iron mould and incapable of keeping up with the changes. Thus, rather than put even rudimentary security in the PC, the user simply took out the floppy disks, put them in a box and into the fireproof safe. Thus in a perverse way the big-iron access control security model lived on.

It was not till the late 80s and early 90s, when PCs had HDs as the norm and business networks started to extend not just over LANs inside the physical perimeter of the building walls but over WANs outside the physical access control perimeter, that people started to wake up and we got the ideas of firewalls, DMZs, Bastion Hosts etc. But the "big iron" mentality still remained, and thus we now have VPNs that still carry the underlying assumption of "physical access control" and "trusted paths"…

But now that we have to deal with the likes of laptops and smart devices, the "physical access control" model is clearly broken, and only now are people waking up to the fact that the "big iron" security model of "physical access control" no longer applies…

The upside is that now, unlike before the 90s, we have the hardware resources to deal with the issue. But the hard-won expertise of the 60s and 70s has long since retired or died, and the vital papers are scattered in dust-covered boxes in people's lofts, garages, old archives and dark basements where nobody goes any longer.

Many of the papers were restricted or carried a higher security rating, paid for by various governments who have long since tucked them away in locked archives where they probably won't see the light of day until after the ideas have been re-invented.

In many respects our Governments have become like the Holy Roman Church, jealously hiding information and burning heretics who try to get the message out. But history tells us they are fighting a losing battle. Whilst they might be able to control “ideas before their time” these ideas will happen to others when their time comes, and it is obvious there is a problem that needs to be solved.

We can see this actually happening currently, but there is still the question of the effect of those layer 8 and above issues pushing down. We saw in the past that crypto was in effect outlawed (its use even carried the death penalty in many countries, and still does). As I've said before, the game has moved on: the likes of the NSA et al are playing with protocols and standards that become ingrained and difficult to change or replace.

But the likes of the DOD have woken up to the fact that not only has the game changed but the “bat and ball” have been taken away from them.

The gulf in security I've mentioned is now hurting the DOD, and they are actually putting their hands in their pockets to get industrial and academic researchers to come up with ways to close it down, if it can be closed (which I don't think it can in a hard, assured way).

COTS technology is here to stay, not just in the private sector but in all the public sectors, including all areas of government and thus the military, diplomatic and intelligence services. The old days of government contracts to build secure systems are long gone and won't come back in the way they once were. This is simply because the Western nations have let the technology ball drop, and those in the Eastern nations have taken it back to play on their home turf.

Thus what to do?

Well, what might happen is the development of "key components" on home turf. That is, to use what little fabrication is available in Western nations to develop silicon that sits as a guardian in the system, controlling information by segregating the untrusted parts and mediating how information flows between them.

Will such systems be developed? I suspect the answer is yes, simply because trying to extend security down into the memory layer is a difficult task: all the current ideas about "tagging" memory also fall to methods that are known to exist in this security gulf. And if you think about it, to be of use DMA still has to be able to change those tags…

As I've indicated in the past, one of the ideas behind "Prisons-v-Castles" is "probabilistic security": that is, you assume you just won't be able to close the security gulf with hardware solutions, so you will need to find some other way to mitigate untrusted hardware.

The question that is arising is, ‘Is the computing industry ready to grab this thorny branch of how to make untrusted components work in a trusted way? or continue to ignore it?’.

I think it can be done and I see from one of your comments on another thread that you are thinking about it. Well I should issue a health warning, that the idea is like a wart, in that not only does it grow on you, the more you scratch it the faster it grows 😉

But all that said, I still like the basic idea behind TRESOR; it does what it says on the tin. The problem is that it is not the basic design of TRESOR that is wrong, it's the basic design of the CPU that is at fault. If Intel made minor changes to the CPU design, such as adding "write only registers" to the CPU to do the crypto, then TRESOR-Hunt would not work, as changing the memory, irrespective of how, would not allow the KeyMat to be read back out of the CPU. It is not as though such write-only registers are unknown in CPU designs.
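
A software model of the write-only-register idea (the real mitigation lives in silicon; this toy class with a trivial XOR "cipher" just illustrates the access pattern):

```python
class WriteOnlyKeyRegister:
    """Key goes in and is used on-die; there is no read path back out."""
    def __init__(self):
        self._key = None                  # reachable only 'inside the chip'

    def write_key(self, key):
        self._key = key                   # an attacker may overwrite the key...

    def read_key(self):
        raise PermissionError("register is write-only")   # ...but never read it

    def encrypt(self, block):
        # Toy XOR standing in for AES rounds performed entirely on-die.
        return bytes(b ^ k for b, k in zip(block, self._key))

reg = WriteOnlyKeyRegister()
reg.write_key(bytes(range(16)))
ciphertext = reg.encrypt(b"sixteen byte msg")
```

A TRESOR-Hunt-style memory manipulation could clobber the key, but clobbering is not disclosure: the KeyMat itself never crosses back over the bus.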

Also, with regard to using a TPM for FDE: logically it should not push KeyMat back to the CPU. The way it should work is that the CPU pushes an identifier and passphrase etc. into the TPM. The TPM then sets up its internal crypto to use the key corresponding to the identifier. The KeyMat thus stays inside the TPM chip, and the CPU funnels data for the HD through the TPM, which acts as the guardian sitting on the information flow, which is in effect what the NSA IME does. So neither the CPU nor system memory gets to see the KeyMat after it's loaded into the TPM device.

But can the KeyMat be moved across system buses that could be probed without it mattering?

The simple answer is yes: provided both ends of the communication can generate sufficient entropy, there are known protocols for the two ends to create a shared secret even with their intercommunication being fully monitored.

Such protocols can and have been built into smart cards; thus designing and making your own IME is possible, and attacks on the system CPU and memory won't allow access to the KeyMat.
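
Diffie-Hellman key agreement is the textbook instance of such a protocol: each end contributes fresh entropy, and a monitor on the bus sees only the public values. A Python sketch with a deliberately small (but real) prime; production systems use standardized 2048-bit-plus groups or elliptic curves:

```python
import secrets

P = 0xFFFFFFFB     # 2^32 - 5, a prime; far too small for real use
G = 5              # generator for this demo group

a = secrets.randbelow(P - 2) + 1   # CPU-side secret, from local entropy
b = secrets.randbelow(P - 2) + 1   # TPM/IME-side secret, from its own entropy

A = pow(G, a, P)   # these two values cross the monitored bus...
B = pow(G, b, P)

# ...yet both ends derive the same shared secret, which never does.
shared_cpu = pow(B, a, P)
shared_dev = pow(A, b, P)
assert shared_cpu == shared_dev
```

Recovering the secret from A and B alone is the discrete-logarithm problem, which is what makes the fully monitored bus tolerable (an *active* man in the middle still requires authentication on top).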

Nick P January 6, 2013 1:07 PM

"Will such systems be developed? I suspect the answer is yes, simply because trying to extend security down into the memory layer is a difficult task: all the current ideas about "tagging" memory also fall to methods that are known to exist in this security gulf. And if you think about it, to be of use DMA still has to be able to change those tags…"

That's why IOMMUs were invented. Intel and (I think) POWER already have them. I think some validated reference designs, put in the public domain, that could be adapted to other processor architectures would be a nice NSF grant opportunity. I think we'll still see such issues in very constrained embedded systems, though.

“The question that is arising is, ‘Is the computing industry ready to grab this thorny branch of how to make untrusted components work in a trusted way? or continue to ignore it?’.”

Actually, you could say they’ve been doing that all along. Almost everything they produce is untrusted software trying to perform trusted functions, with untrust[worthy] security software enforcing behavioral policies on untrust[worthy] apps. You could say there’s a sane way to meld trusted/untrusted parts and an utterly ridiculous way supported by market/user ignorance. They’ve taken the route we’d predict.

As for the good melding you're talking about, it's been going on in a very limited way in academic circles (and some products) for decades. I've been seeing more work than usual in the past 5 years compared with the 90s, although there's still plenty of re-invention. I saw a new solution for stopping buffer overflows, for instance. They would have been more productive if they had Googled MULTICS, then used the grant money on something NEW.

“I think it can be done and I see from one of your comments on another thread that you are thinking about it. Well I should issue a health warning, that the idea is like a wart, in that not only does it grow on you, the more you scratch it the faster it grows ;-)”

You must have forgotten I've been obsessing over this stuff as much as you for years now, incl. on this blog. I know more now than I ever did. Yet, the goal seems more difficult now than ever. I've almost burned out trying to fight the momentum of market and industry. It's why I'm in favor of legal incentives: the market doesn't [usually] produce high quality or security on its own. Our hope is in seeing how this worked with the Orange Book mandate leading to B3/A1 systems & recent DO-178B/DO-178C leading to dozens of high[er] assurance offerings.

"If Intel made minor changes to the CPU design, such as adding "write only registers" to the CPU to do the crypto, then TRESOR-Hunt would not work, as changing the memory, irrespective of how, would not allow the KeyMat to be read back out of the CPU. It is not as though such write-only registers are unknown in CPU designs."

I had never heard of write-only registers. It is known that processors in the past had more innovative hardware with respect to security. Intel also had processors with advanced security models. However, the market demanded the other ones. I can’t really blame Intel: they continue adding security features wherever customer demand justifies it, incl. IOMMU & record speed TRNG. Incentives…

The real secure processor designs right now are Aegis, SecureCore, and SecureME. The people looking into randomized caches have moved onto partitioned caches for performance and simplicity. The latter isn’t so bad an idea as TCB’s are usually divided into trusted and untrusted. It’s not enough for isolation between untrusted apps, though.

"Also, with regard to using a TPM for FDE: logically it should not push KeyMat back to the CPU. The way it should work is that the CPU pushes an identifier and passphrase etc. into the TPM. The TPM then sets up its internal crypto to use the key corresponding to the identifier. The KeyMat thus stays inside the TPM chip, and the CPU funnels data for the HD through the TPM, which acts as the guardian sitting on the information flow, which is in effect what the NSA IME does. So neither the CPU nor system memory gets to see the KeyMat after it's loaded into the TPM device."

I have to agree with Wael in that TPMs aren't made for that. TPMs are designed to seal secrets to a certain system state. This tech is used to bootstrap software security solutions. The superior method you are mentioning is a crypto coprocessor. It may be a certified cryptochip or FPGA, but it keeps the secrets while doing the crypto heavy lifting. This was used in the LOCK platform I occasionally mention, along with a hardware reference monitor for memory access.

The main reason they don’t do that is cost and legacy. TPM’s, Wael tells me, are ridiculously cheap. They also don’t really change anything else about how the computer works. That probably makes integration incredibly easy. So, the TPM economic model seemed to be: we have a product that is useful to those who want it, invisible to those that don’t, easy to integrate, standardized and doesn’t affect your costs.

I still think, with help from hardware guys, we could rig up a better TPM than TPM using crypto-coprocessor like you (we) want. I was considering using one of the Freescale PPC boards with security enhancements to make it simpler. (Also, most crypto comes with x86 or PPC code.) Wire that joker in through memory bus, PCI bus or something. Offload some TCB to it. We’re set against regular attackers.

"The simple answer is yes: provided both ends of the communication can generate sufficient entropy, there are known protocols for the two ends to create a shared secret even with their intercommunication being fully monitored."

Definitely. NSA Type 1 devices support this now, I think. I know they have the Firefly protocol for key exchange. Many commercial cryptochips have features needed to do this.

"Such protocols can and have been built into smart cards; thus designing and making your own IME is possible, and attacks on the system CPU and memory won't allow access to the KeyMat."

I should have read the rest of your sentence first. We were thinking on the same page. 😉

Clive Robinson January 7, 2013 5:38 AM

@ Nick P,

I know more now than I ever did. Yet, the goal seems more difficult now than ever.

Ah ha, you have passed an important step on the path to enlightenment 😉

There is an old story about the various stages of a designer,

When a problem is described to them they say as a,

Young Designer :- No problem, I’ll get right on it.
Mature Designer :- Hmm, I can see some issues, tell me more about your requirements.
Guru :- Hmm I don’t think you understand what you ask, walk with me awhile.

I've always found the more I know about the surface (froth) of a subject, the more I need to know what is below. It's one of the reasons I tend not to try to keep up with trends, which are often skipping across the surface and frequently based on ill-founded assumptions. I try instead to understand the provable fundamentals, and then build on those.

Speaking of ill-founded assumptions, you mentioned IOMMUs. Like all MMUs they can indeed offer increased security, but only if used correctly, and the majority of the time that's the catch.

The problem with an MMU is two fold, the first is what controls it, the second is the issue of the memory that holds the page translation tables.

With single-CPU systems there is no choice: the CPU controls the MMU, and its control buses form part of the CPU-side memory or I/O map. Usually, to make things easy with virtual memory handling, the MMU is either at a hard CPU-side address or put in the I/O map at a hard address. Likewise the page translation tables.

Obviously, whilst this makes things easier for the system designer and kernel developers, it also gives attackers a fixed, well-known target…

If there is a design flaw in the kernel then an attacker can get either at the MMU control lines or at the page translation tables in memory, at which point it’s game over. As I’ve indicated up this thread there are various ways both above and below the CPU level in the stack by which such attacks are possible.

Unfortunately this problem often gets carried over into multi-CPU systems, which for various reasons give kernel-level access to any of the CPUs.

My preferred solution is that none of the user-side CPUs, or for that matter the main kernel CPU(s), have any access to the MMU control lines or the memory holding the page translation tables.

That is, the security hypervisor controls all MMUs, and the page translation tables are stored in its private memory area, entirely separate from user/kernel memory. That way an attacker cannot see the MMU or its tables, let alone make any changes directly. This isolation can be further improved by making the hypervisor a Harvard architecture and ensuring the code base is a simplified state machine (all of which we have discussed in the past).

Provided the hypervisor is correctly designed, it takes complete control of memory provision, and memory is provided to each user process based on the hypervisor's security criteria. This enables the memory allocation process to be massively simplified, reducing complexity and code, and thus bugs and attack vectors. The biggest issue becomes that of priority queuing to avoid locks for essential processes, which can be done by a simple and robust heap mechanism that recognises not just priority but dependency.
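
The arrangement described above can be sketched as a toy model in which only the hypervisor object holds the translation tables and guests get nothing but mediated calls (all names hypothetical; Python's name mangling merely stands in for genuinely private hardware state):

```python
class SecurityHypervisor:
    """Sole owner of the MMU state; user/kernel CPUs hold no handle to it."""
    def __init__(self, num_frames):
        self.__tables = {}                     # private page translation tables
        self.__free = list(range(num_frames))  # simple allocation policy

    def map_page(self, guest_id, vaddr):
        # Allocation policy lives entirely here, not in guest code.
        frame = self.__free.pop()
        self.__tables[(guest_id, vaddr)] = frame
        return frame

    def translate(self, guest_id, vaddr):
        # The only path a guest has to memory: mediated, per-guest translation.
        return self.__tables[(guest_id, vaddr)]

hv = SecurityHypervisor(num_frames=16)
hv.map_page(guest_id=1, vaddr=0x1000)
frame = hv.translate(1, 0x1000)        # guest 1 sees only its own mapping
```

Because guests can only call `map_page` and `translate`, there is no code path by which one guest can read or rewrite another's mappings, which is the property the private-table design is after.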

With regard to write-only registers: they go back to the early days of ALU design, pre-bit-slice processors, and the microcoded state machines that turned them into usable CPUs [1].

A write-only register is in effect a D-type latch; a read/write register is the D-type latch plus a tristate buffer feeding the latch output back to the data bus at the input, along with some arbitration logic for read and write operations.

Obviously there are considerably fewer gates and less metallisation with a write-only latch, thus they were preferred by resource-limited hardware designers. The problem was with writing code: with a well-developed state machine such as those used in CPU state machines, the microcode had no reason to read back an address or data latch etc. Not so general computer application-level software developers: their inability to take responsibility for maintaining accurate state information within their programs meant reading back control registers and the like became a requirement [2].

As for TPMs, yes, Wael is right: many are of limited functionality at best, and thus have failings that allow people to get around their supposed functionality.

What we need, along with IOMMUs, is what you might consider to be IOIMEs: each IO channel is in effect a stream with an IME sitting as a guardian astride it. The TPM would work as I described and would logically be built within the IME, or would use an entirely separate channel to get KeyMat from a fill device such as an appropriate EAL smart card.

As @RobertT has noted, as well as Bruce, whilst we have adequate crypto algorithms these days, it's the KeyMat handling that lets us down badly, and for some reason it's another area where there is a large gulf in security.

And before anybody says PKI: it's ill thought out and thus brings as many problems to the table as it solves, if not more. Like code signing and other trust initiatives, PKI is based on a number of assumptions that at best only just hold with a following wind. For instance, consider the initial pairing of Alice and Bob: how does Alice know she is actually talking directly to Bob, and not to some man-in-the-middle bridge device with its own PK cert claiming to be Bob, which Bob is inadvertently going through (think of the issues with certain governments and fake certificates used to catch dissident Facebook users, etc.)? You still end up needing an initial trusted path between Alice and Bob to transfer their respective public keys, and this should preferably be "out of band" signalling.
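The usual out-of-band mitigation is key-fingerprint comparison: each side hashes the public key it received and the fingerprints are compared over a separate channel (a phone call, in person). A toy sketch, with raw byte strings standing in for real public keys:

```python
import hashlib

def fingerprint(public_key_bytes):
    """Short, human-comparable digest of a public key (SSH-style idea)."""
    digest = hashlib.sha256(public_key_bytes).hexdigest()
    # group into 4-char chunks so it can be read out over the phone
    return ":".join(digest[i:i + 4] for i in range(0, 16, 4))

def pairing_is_authentic(key_alice_received, fingerprint_bob_spoke):
    # Alice fingerprints the key she *received* over the network and
    # compares it with what Bob read out over the out-of-band channel.
    return fingerprint(key_alice_received) == fingerprint_bob_spoke

bob_key = b"BOB-REAL-PUBLIC-KEY"          # what Bob actually sent
mitm_key = b"MALLORY-SUBSTITUTED-KEY"     # what a MITM substituted
spoken = fingerprint(bob_key)             # Bob reads this out by phone
```

The point is exactly the one made above: the fingerprint check only helps because it travels over a second channel the MITM does not control; it does not remove the need for that initial trusted path, it *is* that path.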

[1] For those who have never seen the logic design of an ALU, look up the 74181 or any of the later 2900-series bit-slice processors from AMD. I won't bother giving ECL parts, as they really are difficult to get data sheets on, and they can twist your mind trying to understand the whys and wherefores of them.

[2] I suspect that some people will think I'm being a little harsh on code cutters who, due to the deficiencies of other code cutters in accurately maintaining state, have to go out and find it for themselves, especially when dealing with the likes of exceptions. I'm not; it's the poor design of the software that is to blame. If they had to maintain state in the program, then the program design would be more robust, and in all probability less complex and less bug-ridden. For some reason nearly all programmers are "optimistic", in that they assume exceptions are not going to happen and thus state does not need to be kept. Thus when an exception happens, data is lost, the program crashes, or both, none of which is good, especially when it's the brakes of your car the program is controlling…

Wael January 7, 2013 12:55 PM

@ Clive Robinson, @ Nick P

As for TPMs, yes, Wael is right: many are of limited functionality at best, and thus have failings that allow people to get around their supposed functionality.

I didn’t say that! I am just saying they are designed with a price point in mind to serve a specific purpose. They act as the root of trust for a computing device. If you think of them as a crypto-processor (or accelerator), then of course they are very limited.

Here is a cut-and-paste from the FAQ on the TCG site:

Do the TPM specifications require a certain cryptographic algorithm (DES, AES, etc.)?
Yes. They require RSA SHA-1 and HMAC. AES is not required in v1.1 of the specification, but may be required in future versions. The use of symmetric encryption is not required in the TPM. TCG will continue to evaluate developments in cryptography.
How do TPMs compare with smart cards or biometrics?
They are complementary to the TPM, which is considered a fixed token that can be used to enhance user authentication, data, communications, and/or platform security. A smart card is a portable token traditionally used to provide more secure authentication for a specific user across multiple systems, while biometrics are providing that functionality in an increasing number of systems. Both technologies can have a role in the design of more secure computing environments.
What role does Trusted Computing and the TPM play in authentication?
The TPM provides secure storage and key generation capabilities, similar to other hardware authentication devices, so it can be used to create and/or store both user and platform identity credentials for use in authentication. The TPM can also protect and authenticate user passwords, thereby providing an effective solution of integrating strong, multifactor authentication directly into the computing platform. With the addition of complementary technologies such as smart cards, tokens and biometrics, the TPM enables true machine and user authentication.
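The v1.1 primitives named in that FAQ (RSA, SHA-1, HMAC) are all available in standard libraries. A quick illustration of the HMAC-SHA-1 keyed hash only, not of any TPM API: in a real TPM the key lives inside the chip and is only reachable through the TSS/TPM command set.

```python
import hashlib
import hmac

# HMAC-SHA-1, the keyed-hash primitive mandated by the TPM v1.1 spec.
# The key and message here are made-up example values.
secret_key = b"example-shared-authorisation-secret"
message = b"example authorisation payload"

mac = hmac.new(secret_key, message, hashlib.sha1).hexdigest()
print(len(mac))  # SHA-1 produces a 160-bit digest = 40 hex characters
```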

And here is a brief of the “purpose”, copied from TCG portal:

Systems based on Trusted Computing:

Protect critical data and systems <b>agains</b> a variety of attacks
Enable secure authentication and strong protection of unlimited certificates, keys, and passwords that otherwise are accessible
Establish strong machine identity and integrity
Help satisfy regulatory compliance with hardware-based security
Cost less to manage, removing need for expensive tokens and peripherals

Seems they have a typo — if any TCG contributing members are reading this, please fix the “typo” — Against, not agains 🙂

Refer to the TCG site for more information. You will need some time going through a few thousand pages of specifications 🙂

Finally, @ Clive Robinson…
Did you mention Castles-V-Prisons again? 🙂

Nick P January 7, 2013 12:59 PM

@ Wael

Appreciate the reply and information.

“Did you mention Castles-V-Prisons again?”

He did… don’t get him started…

Mikey January 16, 2013 11:34 PM

As a lurker of many, many years, I just want to add that I think one of the best things about this site is not just the articles or opinions that Bruce posts, but the discussions in the comments.

I think I’ve learnt more from Clive Robinson, Nick P, et al, than from most of the textbooks and reference guides collecting dust on my bookshelf that I used to pick up my alphabet soup of certs and qualifications.

Patrick Molinelli October 31, 2014 7:17 PM

Mr. Schneier,

I am a student at University of Maryland College, about to get my degree in cybersecurity. The whole field of forensics and hacking is fascinating to me, and it continues to get more exciting as time progresses.

I have read several articles and written a few papers on the forensics of various digital mediums, and just finished reading your article.

I was wondering: if a computer or laptop is BitLocker-encrypted and cleanly shut down, and the individual never hibernates the system, uses new DDR3 memory, and has a very lengthy, complex password that is not saved on an external medium or in a file on the computer or laptop in question… would this be a forensic investigator's nightmare?

It seems most methods involve pulling information from a running computer, or rely on pulling data from the memory of a system that was hibernated, shut down incorrectly, etc.

I saw that there was a study showing that certain information, such as passwords, was retained for a short time in RAM on older types of memory, but that newer memory is not susceptible to this type of cold-boot attack.


Patrick Molinelli
