Can the NSA Break Microsoft's BitLocker?

The Intercept has a new story on the CIA's -- yes, the CIA, not the NSA -- efforts to break encryption. These are from the Snowden documents, and talk about a conference called the Trusted Computing Base Jamboree. There are some interesting documents associated with the article, but not a lot of hard information.

There's a paragraph about Microsoft's BitLocker, the encryption system used to protect MS Windows computers:

Also presented at the Jamboree were successes in the targeting of Microsoft's disk encryption technology, and the TPM chips that are used to store its encryption keys. Researchers at the CIA conference in 2010 boasted about the ability to extract the encryption keys used by BitLocker and thus decrypt private data stored on the computer. Because the TPM chip is used to protect the system from untrusted software, attacking it could allow the covert installation of malware onto the computer, which could be used to access otherwise encrypted communications and files of consumers. Microsoft declined to comment for this story.

This implies that the US intelligence community -- I'm guessing the NSA here -- can break BitLocker. The source document, though, is much less definitive about it.

Power analysis, a side-channel attack, can be used against secure devices to non-invasively extract protected cryptographic information such as implementation details or secret keys. We have employed a number of publically known attacks against the RSA cryptography found in TPMs from five different manufacturers. We will discuss the details of these attacks and provide insight into how private TPM key information can be obtained with power analysis. In addition to conventional wired power analysis, we will present results for extracting the key by measuring electromagnetic signals emanating from the TPM while it remains on the motherboard. We will also describe and present results for an entirely new unpublished attack against a Chinese Remainder Theorem (CRT) implementation of RSA that will yield private key information in a single trace.

The ability to obtain a private TPM key not only provides access to TPM-encrypted data, but also enables us to circumvent the root-of-trust system by modifying expected digest values in sealed data. We will describe a case study in which modifications to Microsoft's Bitlocker encrypted metadata prevents software-level detection of changes to the BIOS.

Differential power analysis is a powerful cryptanalytic attack. Basically, it examines a chip's power consumption while it performs encryption and decryption operations and uses that information to recover the key. What's important here is that this is an attack to extract key information from a chip while it is running. If the chip is powered down, or if it doesn't have the key inside, there's no attack.
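To make the idea concrete, here is a toy sketch of differential power analysis (my own illustration, not anything from the documents): we simulate a chip whose power draw leaks the Hamming weight of an S-box output, then recover the key byte by correlating predicted leakage against the measured traces. The S-box and noise model are made up for the demo.

```python
import random

random.seed(1)  # deterministic demo

# Toy 8-bit S-box (a random permutation; real DPA targets e.g. the AES S-box).
SBOX = list(range(256))
random.shuffle(SBOX)

def hamming_weight(x: int) -> int:
    return bin(x).count("1")

def simulate_trace(pt: int, key: int) -> float:
    # Leakage model: instantaneous power draw proportional to the Hamming
    # weight of the S-box output, plus measurement noise.
    return hamming_weight(SBOX[pt ^ key]) + random.gauss(0, 0.5)

def correlation(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def recover_key_byte(plaintexts, traces) -> int:
    # For every key guess, correlate predicted leakage with the measured
    # traces; the correct guess correlates far better than the rest.
    return max(range(256), key=lambda g: correlation(
        [hamming_weight(SBOX[p ^ g]) for p in plaintexts], traces))

SECRET = 0x42
plaintexts = [random.randrange(256) for _ in range(500)]
traces = [simulate_trace(p, SECRET) for p in plaintexts]
print(hex(recover_key_byte(plaintexts, traces)))  # 0x42
```

Note the precondition this illustrates: the attacker needs many measurements of the chip *while it is operating on the key*. No operations, no traces, no attack.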

I don't take this to mean that the NSA can take a BitLocker-encrypted hard drive and recover the key. I do take it to mean that the NSA can perform a bunch of clever hacks on a BitLocker-encrypted hard drive while it is running. So I don't think this means that BitLocker is broken.

But who knows? We do know that the FBI pressured Microsoft to add a backdoor to BitLocker in 2005. I believe that was unsuccessful.

More than that, we don't know.

EDITED TO ADD (3/12): Starting with Windows 8, Microsoft removed the Elephant Diffuser from BitLocker. I see no reason to remove it other than to make the encryption weaker.

Posted on March 10, 2015 at 2:34 PM • 69 Comments

Comments

Spaceman Spiff • March 10, 2015 2:59 PM

I don't know about breaking it, but subverting it? That's an entirely different (and more pernicious) story. Look what they did with Xcode!

Ken Thompson's evil, and sexier, secret twin • March 10, 2015 3:09 PM

What's this Xcode issue you speak of, Spiff?

Dirk Praet • March 10, 2015 3:10 PM

I have considered Bitlocker broken ever since Christopher Tarnovsky presented such a proof of concept with an Infineon TPM at Black Hat DC in 2010. His initial research work took him about six months. Subsequently retrieving the license key of an XBox 360 Infineon TPM apparently only required an additional six hours. According to Tarnovsky, the required lab equipment represents an investment of about $200,000.

Thomas • March 10, 2015 3:19 PM

"Can the world's most powerful spy agency break the security of a TPM provided by the lowest bidder?"

My guess would be "yes" (and if the answer is "no" then my comment would be "why not?").

Given physical access to a target system I'd expect them to be able to defeat a security measure primarily designed to provide a friendly green tick in a sales brochure.

Jeremy • March 10, 2015 3:32 PM

Three Ways to Acquire Encryption Keys

Elcomsoft Forensic Disk Decryptor needs the original encryption keys in order to access protected information stored in crypto containers. The encryption keys can be derived from hibernation files or memory dump files acquired while the encrypted volume was mounted. There are three ways available to acquire the original encryption keys:
By analyzing the hibernation file (if the PC being analyzed is turned off);
By analyzing a memory dump file *
By performing a FireWire attack ** (PC being analyzed must be running with encrypted volumes mounted).
* A memory dump of a running PC can be acquired with one of the readily available forensic tools such as MoonSols Windows Memory Toolkit
** A free tool launched on investigator’s PC is required to perform the FireWire attack (e.g. Inception)
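The memory-dump route works because raw key material stands out statistically: keys are uniformly random bytes, while most of RAM is not. A rough sketch of that idea (hypothetical code, not Elcomsoft's actual method) that flags high-entropy 16-byte windows in a dump:

```python
import math

def entropy_bits(window: bytes) -> float:
    # Shannon entropy in bits per byte over the window.
    counts = {}
    for b in window:
        counts[b] = counts.get(b, 0) + 1
    n = len(window)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def candidate_key_offsets(dump: bytes, size=16, threshold=3.5):
    # Cryptographic keys are uniformly random, so a window holding one
    # shows near-maximal entropy; zeroed or structured memory does not.
    return [i for i in range(len(dump) - size + 1)
            if entropy_bits(dump[i:i + size]) >= threshold]

# Hypothetical demo: a mostly-zero "dump" with a key embedded at offset 100.
dump = bytearray(4096)
key = bytes.fromhex("2b7e151628aed2a6abf7158809cf4f3c")  # FIPS-197 sample key
dump[100:116] = key
offsets = candidate_key_offsets(bytes(dump))
print(offsets)  # a small cluster of offsets around 100
```

Real tools refine this by checking candidates against the AES key-schedule structure rather than entropy alone, but the core observation is the same.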

flop_flop • March 10, 2015 3:37 PM

I do take it to mean that the NSA can perform a bunch of clever hacks on a BitLocker-encrypted hard drive while it is running.

A point of clarification.

Last I read, the drive itself does not ship with a TPM. The TPM is essentially a board-mounted smart card chip. So you'd need both the drive and the mainboard to monitor voltage and eventually recover a key. Not a simple hack. This is one of the most difficult methods for recovering encrypted data.

Make a backdoor into the OS a part of a healthy purchasing contract. Intercept the data at the OS level. They'll do it.

Daniel • March 10, 2015 4:48 PM

To me, the single biggest aspect of the whole Dread Pirate Roberts/Silk Road situation was the concentrated efforts the FBI took to make sure they seized his laptop while it was still running. Methinks that if the FBI had an easy way to decrypt hard drives they would not have made sure they seized the laptop before they seized him.

The lesson here seems to be clear. Yes, it's a pain to turn equipment on and off. But if the FBI/CIA is a threat vector for you... keep the equipment turned off as often as possible. It significantly reduces their attack surface.

Ken Thompson's evil, and sexier, secret twin • March 10, 2015 4:51 PM

@Tom HOW COULD THIS HAPPEN, WHILE MY BLACK LANTERN OF lulz REMAINS SILENT?!

Oh, it's just got a broken mantle. Corps of Lesser Known Lanterns, re-disperse back to your dimensions of interest, false alarm. Mybad.

Matt • March 10, 2015 4:54 PM

The paragraph in context appears to be talking about extracting the TPM-stored key-material *while the device is on*.

Extracting the bitlocker key when the device is off would require an attack against AES itself or the ability to efficiently brute-force the PBKDF routine used to mix the bitlocker user-key and the TPM-key to recover the disk-encryption key.
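As a generic illustration of that mixing step (all names hypothetical; BitLocker's real derivation differs in detail), the usual construction is a deliberately slow KDF over the concatenated secrets, which is why offline brute force is useless without the TPM-held half:

```python
import hashlib

# Hypothetical sketch: deriving a volume key by mixing a user PIN with a
# TPM-held secret through a deliberately slow KDF. BitLocker's actual
# derivation differs in detail; the point is that offline brute force is
# useless without the TPM secret, and slow even with it.
def derive_volume_key(user_pin: str, tpm_secret: bytes, salt: bytes) -> bytes:
    material = user_pin.encode() + tpm_secret
    return hashlib.pbkdf2_hmac("sha256", material, salt, iterations=100_000)

k1 = derive_volume_key("1234", b"\x00" * 32, b"volume-salt")
k2 = derive_volume_key("1235", b"\x00" * 32, b"volume-salt")
print(k1 != k2, len(k1))  # True 32
```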

Slime Mold with Mustard • March 10, 2015 5:08 PM

It's the first part of this post that really has my blood boiling. We all know that they're smelling about our business, but the fact that I pay for it twice is some sort of tragic-comedy.

In 1947, the CIA was created as result of congressional investigations that showed that the US had the data to forecast the attack on Pearl Harbor, but the various agencies were not sharing information.

In the 1960s, the CIA set up its own SIGINT unit, as it suspected the NSA was holding back.

In 2002, the US created the Directorate of National Intelligence: The info that might have prevented the September 11, 2001 attacks had not been distributed.

This pattern is not unique to government: I see the same damn thing in nearly every corporation I walk into. Every year, Challenger, Gray & Christmas releases reports on stupid things like "New Year's hangovers cost this much to the economy". Could someone calculate the cost of the "Territorial Imperative"?

anonymousE • March 10, 2015 6:02 PM

More like:

Can the NSA ever secure a M$ product long enough against the FBI?

-Regards.

Wael • March 10, 2015 6:04 PM

Researchers at the CIA conference in 2010 boasted about the ability to extract the encryption keys used by BitLocker and thus decrypt private data stored on the computer. Because the TPM chip is used to protect the system from untrusted software, attacking it could allow the covert installation of malware onto the computer
  1. TPMs were not designed to protect against Class III adversaries with physical access -- they are a sub-$1.00 component (for the discrete part!)
  2. Not all TPMs are created equal!

Notice the highlighted text: (I haven't read the whole article)
It does not say the keys were extracted by attacking the TPM itself. Drive encryption keys could be obtained by attacking a weak implementation in the BIOS, UEFI or other components. The TPM does not "work alone": if the "keys" are weakly "sealed" or have accompanying metadata stored outside the TPM... Most likely, it's a "weak" implementation they are able to exploit.

Truecrypt_vs_Bitlocker • March 10, 2015 6:12 PM

Hi, could it be that the whole Truecrypt joke about using Bitlocker in fact means: be very aware of the fact that Windows 8 is dangerous, the Elephant Diffuser and UEFI and whatnot? It would be very interesting to see what the Truecrypt analysis reveals. Meanwhile I stick to Linux and occasionally W7.

Cheers and thank you for a wonderful blog, it makes me happy to read it every day!

Imp • March 10, 2015 6:58 PM

Who cares? Microsoft's OS already has the NSA's backdoor in it.
Also, Windows 10 will be shipped with a keylogger.

Use Dikcryptor or Truecrypt and stay away from Bitlocker.

Roland • March 10, 2015 7:08 PM

Bruce, I'd like to hear your take on the "Trusted Computing Initiative". This involves a co-processor (ARM on AMD, ARC on Intel) running code from who-knows-where, apparently to enforce "Digital Rights Management" for the MPAA/RIAA, and for who-knows-who else, for purposes not necessarily in the interest of the person who owns the computer in question.

Dirk Praet • March 10, 2015 8:08 PM

@ Imp

... Use Dikcryptor ...

Sounds like a line from a pr0n parody film on Citizen Four.

little tricks • March 10, 2015 10:06 PM

https://support.microsoft.com/kb/2421599

This is a new problem, specifically with the FireWire group policy setting that stops the loading of the FireWire device drivers (to block access to a running system's keys).

As of the last few months, the above problem presents itself: suddenly, when it didn't do so before, a Windows machine will now 'occasionally' load the FireWire device anyway.

Also note that setting the BIOS to not use the device is ignored.

WalksWithCrows • March 10, 2015 10:19 PM

@Slime Mold with Mustard

In 1947, the CIA was created as result of congressional investigations that showed that the US had the data to forecast the attack on Pearl Harbor, but the various agencies were not sharing information.

That was Hoover's doing. He captured a Nazi agent with a microfiche file that pinpointed Pearl Harbor, if I recall.

@anonymousE

Can the NSA ever secure a M$ product long enough against the FBI?

Word on the street is the FBI doesn't actually have many vulnerability analysts working for them. Probably true, considering that their last big intelligence bust was painfully "ho hum". Probably Anna Chapman (okay, not so 'ho hum') knew she was targeted by them as soon as she got off the plane, from the way she studiously did just about nothing.

I kind of don't think that the Sabu case, and how it came out, was exactly the apex of professionalism, either... :/

They are pretty damned good at some criminal cases these days, though.


konst • March 10, 2015 10:23 PM

I don't believe you can get useful info from DPA to crack a good TPM chip. I know this comment doesn't add much but what I think is that they want everyone to think they can do it instead of actually being able to do it.

Jonathan Wilson • March 11, 2015 12:26 AM

I wouldn't trust any encryption product invented by Microsoft for anything sensitive.

Clive Robinson • March 11, 2015 3:31 AM

@ AnonymousE, Walks With...,

Can the NSA ever secure a M$ product long enough against the FBI?

In this day and age I don't think you can buy a laptop that has not already been backdoored by some malware company in one way or another... and it's not just M$ products that have usability-v-security issues. Unfortunately the original "better than Unix" design suffered from resource issues, and security was an early sacrifice on the table, as it almost always is. The problem now, several generations later, is that whilst the resources now exist, putting security back in is a Herculean task; thus the "bolt on, not build in" path was followed... Bolting on is never an elegant or efficient process, and thus "chinks in the armour" almost always result. Thus it's reasonable to suppose that there are many currently unknown security vulnerabilities in such products. In general it's difficult to defend against what you can neither see nor understand. So with equally skilled attackers and defenders, the attacker has the advantage of only needing to find one undefended vantage point.

So what of the relative skill levels,

The FBI have always tried to maintain the spectre of being the world's leading crime lab, but the simple fact is they were caught with their pants down by computer crime, and the NSA had thirty years or so of clear advantage on them back then.

Various articles have pointed out in the past that the way the FBI goes about things is by psychological leverage or mind games, using obscure laws -- which are so broad they can make being alive appear to be a crime -- to make people think they are never going to see the light of day again, let alone their loved ones. If that does not work on a person, then they go after the family and other loved ones until they find someone it does work on. In short, they go for mental torture to extract cooperation from those not mentally strong enough to resist it.

When you think about it, it's actually quite an unethical way of going about things, and in some areas a case of "the lunatics have taken over the asylum". It's also the expected result of the "plea bargain" system when used by sociopaths on a results-based career path, where failure means being cast down or out, and thus is not an option.

They also want "show trials" to bolster their image, so if in ordinary life you are a celebrity of some kind then they will pull out the stops. Either to catch those who have made you a victim, or ensure you are found guilty of anything if you are a suspect.

Thus to avoid their attentions you need to be, in effect, a non-entity doing minor things to other non-entities in areas that don't have political light shining on them. And whatever you do, don't make them look stupid publicly; that is the ultimate career killer for a Fed, and they will seek revenge in any way they can to wipe the slate.

To be good at attacking computers requires almost the opposite of "people skills": an insight into the how and the why of things, and the ability to methodically test for failings in complex systems. Usually such activities mean "ploughing the lonely furrow", which means a lack of human interaction, which usually means poorly developed social skills, which makes the person very susceptible to the mental torture techniques favoured by the Feds. It also makes the person vulnerable to emotional manipulation. Thus such people are not going to make it through the door as traditional FBI employees, and those they had were not particularly amenable to becoming skilled in that area, let alone suffering the five to ten years of training to get the skills.

Thus the Feds tried to solve their lack of computer chops quickly by targeting those with the skills and "turning them".

So the question is not so much what the NSA can do to stop the FBI, but what the NSA can do to stop the FBI getting their hands on the people who can break NSA-secured systems...

Which brings up the question of just how well secured NSA systems are against those with the appropriate abilities. Well, the NSA started with thirty years or more of experience, but a lot of it was in "big iron" in secure physical premises etc. The market changed rapidly from big iron to resource-limited stand-alone desktops in physically insecure places, for which they had, in effect, to start again. The changes kept coming, and even the NSA worked the "treadmill of pain" trying to keep up. Thus the thirty-year advantage whittled away to next to nothing. It's only in recent years that PC computing resources have got sufficiently far ahead that those thirty years of hard-learned lessons on big iron can be transferred across... But the people who lived them are long retired, and many of the new kids either don't know it or don't get it.

Thus, given the Ed Snowden revelations and how they came about, it appears the NSA either cannot secure its systems or doesn't have the resources to do it across all of them...

Now the "cannot" might not be due to a lack of relevant skills; it may be a reality of getting computers and users working effectively in a given work environment.

And it appears that Ed Snowden actually was aware of this, and to a certain extent exploited this aspect to do what he did.

So I would say that if the FBI could recruit someone with the right skills and get them into the NSA or one of its contractors, then the answer is "No", they could not secure any systems, let alone just M$ ones, against the FBI.

But then this has been the way of espionage since before records were kept, and there are ways of mitigating such insiders via audit records and vigilance etc., but this tends to have a negative effect on the employees and the efficiency of their work... plus, as Ed Snowden also showed, there are ways to get around the mitigations, sufficiently so as not to raise alarms.

Thus the question falls to just how many people out there have the required technical and human skills, and what are they worth to get to work for you. Snowden and Mitnick might be able to tell you.

Wm • March 11, 2015 7:03 AM

We must not forget that Billy Boy agreed to give the Microsoft OS to the FBI to make their own modifications to it before it was distributed, so what is all the worry about Bit Locker? As a result, Intel machines certainly have other ways to control and get data from your computer. Then there is the completely separate operating system (chip) that runs beside the main CPU in Intel computers that probably gives Intel, and who knows who else, complete access to your data.

n00b • March 11, 2015 8:33 AM

We will also describe and present results for an entirely new unpublished attack against a Chinese Remainder Theorem (CRT) implementation of RSA that will yield private key information in a single trace.

I wonder what they mean by that. Is that implementation still out there?

65535 • March 11, 2015 9:08 AM

@ Matt

“The paragraph in context appears to be talking about extracting the TPM-stored key-material *while the device is on*.”

Yep, but how wide a problem is this?

It looks like the NSA can game Bitlocker – end of story.

The big question is:

Can the NSA game Microsoft’s updates at will? My guess is probably. If Bitlocker is gamed, then what about MS updates?

Aaron Spink • March 11, 2015 10:22 AM

@Roland

TCI is neither good nor evil. The best way to think about it is like a gun: intrinsically there is no good or evil in a gun; it's all in how it's used. TCI actually makes a lot of sense from a security standpoint at the top level or basic infrastructure point. The issues with TCI are all around implementation. The current flaws with TCI are related to the various closed-source pieces of the majority of the software setting up TCI.

In order to have any actual security, you need a couple things...

Open sourced software with documented build procedures and build tools (so that you can verify what is running and verify that what is installed is actually what you want running).

Cryptographically signed/hashed distribution, verified via hardcoded roots of trust (i.e. static, on-chip, read-only firmware), as verification of source.

Write-once hardware-based public keys and/or hardware-based write disables for all writable firmware (so that you can actually control what is installed).

A non-firmware-based firmware read path for all devices (and this is beyond critical: if you have to rely on firmware to read firmware, then you can't ever verify firmware!).

And then at that point, the actual TCI infrastructure allows you to make sure that nothing has changed on you. Which is all TCI really does.
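The "make sure that nothing has changed on you" idea can be sketched as a simple hash chain of trust (illustrative code, not any vendor's actual secure-boot logic; all names are made up): a value burned into read-only hardware pins the first stage, and each verified stage carries the expected hash of the next.

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def verify_chain(rom_hash: bytes, stages, expected_next) -> bool:
    # stages[0] must match the hardware-anchored hash; afterwards each
    # verified stage vouches for the next via expected_next[i].
    expected = rom_hash
    for stage, nxt in zip(stages, expected_next + [None]):
        if sha256(stage) != expected:
            return False  # tampered stage: refuse to boot
        expected = nxt
    return True

bootloader = b"bootloader v1"
kernel = b"kernel v1"
rom = sha256(bootloader)  # stands in for the write-once hardware anchor
print(verify_chain(rom, [bootloader, kernel], [sha256(kernel)]))  # True
assert not verify_chain(rom, [bootloader, b"evil kernel"], [sha256(kernel)])
```

This also shows why the "non-firmware read path" item above matters: if the attacker controls the code computing `sha256(stage)`, the whole chain is meaningless.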

65535 • March 11, 2015 10:46 AM

@ Slime Mold with Mustard

“…the first part of this post that really has my blood boiling. We all know that they're smelling about our business, but the fact that I pay for it twice is some sort of tragic-comedy…”-Slime Mold with Mustard

You have that correct.

But it is not a comedy.

It is a well-choreographed subversion of our money. Those people that are not net taxpayers benefit. Those that are net taxpayers do not benefit!

I say the NSA and all of its tentacles should have a 35% across-the-board cut in their budget.

This appears to be the only way of stopping them. Hit them where it hurts – in the wallet.

flop_flop • March 11, 2015 3:05 PM

Whatever they end up rebranding the Trusted Platform Initiative as, it's evil.

Its purpose is to lock the end user out of the hardware, creating a locked box out of any general-purpose computer. Unless you are the NSA, or FBI, or....

Of course no one in the industry is willing to admit that because of the money and powers involved, but it's evil.

Stephen Street • March 11, 2015 5:13 PM

What does everyone think of the following sentence in the "source" document:

"We will also describe and present results for an entirely new unpublished attack against a Chinese Remainder Theorem (CRT) implementation of RSA that will yield private key information in a single trace."

This seems like a BIG problem. I wonder what the words "single trace" mean and imply....
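"Single trace" most likely means simple power analysis: reading the secret-dependent square-and-multiply sequence of one exponentiation straight off one power trace, rather than statistically combining many traces. CRT implementations are attractive targets because each half-size exponentiation uses a small secret exponent, and leaking either one gives up the whole key. A toy sketch with tiny primes (illustrative only; not the attack from the slides):

```python
# Toy RSA-CRT decryption (hypothetical 17-bit primes; real keys are far
# larger). The two half-size exponentiations below are what a power trace
# would observe.
p, q = 65537, 65539
n, e = p * q, 65537
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)

# CRT parameters, precomputed and stored in real RSA private keys:
dp, dq = d % (p - 1), d % (q - 1)
q_inv = pow(q, -1, p)

def rsa_crt_decrypt(c: int) -> int:
    m1 = pow(c % p, dp, p)       # exponentiation mod p (leaks bits of dp)
    m2 = pow(c % q, dq, q)       # exponentiation mod q (leaks bits of dq)
    h = (q_inv * (m1 - m2)) % p  # Garner recombination
    return m2 + h * q

msg = 123456789
assert rsa_crt_decrypt(pow(msg, e, n)) == msg

# Why one leaked half is fatal: m**(e*dp) is congruent to m mod p for any m,
# so gcd(pow(2, e*dp, n) - 2, n) recovers the prime p in practice.
assert (pow(2, e * dp, n) - 2) % p == 0
```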

Fixing windows update nsa • March 11, 2015 5:32 PM

The public has bought and chowed doubly down on the Iranian centrifuge NSA hack, believing incorrectly that a hacked USB stick was used to introduce the virus, when in fact Microsoft Update was impersonated. Impossible? Read back on the security bulletins from when the Iranian hack became public. Notice a complete reworking of the WU keys, etc. MS was NOT happy by any means. So yes, they can (or at least could, at will) impersonate Windows Update.

Thoth • March 11, 2015 10:10 PM

There is so much that can go wrong if you are going to look at Bitlocker or even Truecrypt or any encryption by itself.

1.) Assume running on COTS desktops, laptops, tablets, phones, workstations and normal computing servers and stuff. They ARE NOT DESIGNED TO BE SECURE!!!

That brings about the whole conversation between me, Nick P, Wael, Clive Robinson and, in the past, RobertT on security of the system itself. Before you go to the software or OS layer, the hardware MUST BE SECURE! This includes discussions of Castles, Prisons and the like.

2.) The use of a TPM chip or secure execution like ARM TrustZone (which integrates into ARM processors). TPMs or security chips can be mounted as a separate chip or integrated within the processor. A separate chip has the disadvantage of being swapped out more easily, and a "mismatch" between the known state of the processor and the known state of the TPM chip makes it less secure. An integrated design like ARM TrustZone builds a security layer into the processor so it can be used for secure and insecure purposes. Such integration makes the known state more cohesive and less prone to spoofing or mismatch of state.

Regardless of the secure chips you use, I theorized in past posts that without battery-backed security chips, you are less likely to be secure unless you use a Silicon Physically Unclonable Function (SPUF) as proposed in the AEGIS security processor.

The reason battery-backed security chips are necessary is to prevent offline attacks on the chip which will bypass the security circuits. Of course there are online attacks like clock resets, power voltage attacks, glitch attacks and such but they need to proceed carefully on a well protected battery-backed security chip to prevent triggering tamper reactions.

The most ideal battery-backed security chip would load all keys in transient memory states, so that removal of battery/power would effectively render all keys destroyed, but that is not the case for most security processors (which includes HSMs, smartcards, tokens... etc.). From what I have noticed, during power-off the master key and critical keys are saved into a limited set of flash memory in the security processor, within the security encapsulation, which is not ideal at all. The reason for doing so is that when power is reapplied, the user can skip the steps of reloading the keys; this offers convenience but downgrades security by a huge margin.

3.) I have shown in one of my past posts how to do your home-made FIPS level device and showed that FIPS 140-2 pub is pretty pointless from a security standpoint of cryptographic device and key protection.

Let's pick at FIPS 140-2 Level 3. FIPS 140-2 Level 3 requires that you have tamper evidence and some tamper resilience, besides not allowing keys to be exportable. The basic tamper resilience most vendors will go for is a security mesh circuit: tightly packed copper wires wired to a tamper-detection circuit, so that cutting the mesh to access the security contents below it would theoretically trip the electronic tamper response and cause the keys to be zeroized. Ion beam workstations could be used to work on the copper wires to make a precision cut and allow probes to be inserted (if the tamper wire circuit has considerable space and the ion beam can be adjusted with precision). To fulfill the tamper evidence, you simply "pot" the security chip in epoxy resin so tampering stands out: any scratches on the epoxy resin "protection" show tamper. You can easily mimic these settings to make your home-made FIPS 140-2 Level 3 chips by doing the above.

It doesn't provide that much tamper protection though.

You have to consider electronic non-invasive probes on the power lines and data lines and clock lines which the attacker would do non-physical probes by glitching those lines.

All these details can be found in Ross Anderson's and Markus G. Kuhn's research on tampering with security chips.

4.) Sleeping with the Feds in bed .... we know they are doing something. Needless to explain in detail. Bitlocker or not, those chips and software out there are mostly backdoored anyway.

5.) Security programming concepts, provable high assurance... I don't think they took that extent of thought when developing those crypto products. If they want high assurance, they should load the Bitlocker critical code and keys not in a running insecure machine but in a properly built and designed security processor or secure setup of sorts. A truly high-assurance design can be very expensive and very bulky and tedious to use.

6.) Closed-source setups are one of the big culprits, and a truly open hardware and software security setup is rare. You don't know what's inside that smartcard, security token, TPM or HSM.

7.) Overall, the closed-source nature, the almost ad-hoc security setup and the low-to-no-assurance nature make things so hard to truly secure, but the worst part is the humans in the chain of decision making who refuse to acknowledge and change.

8.) Finally, the only security left is for us to pretty much do our own setup and OPSEC. The industry is not responding, is often deceitful about security concerns, and is under the pressure of tyrannical Govts, Powers That Be and Agencies.

9.) Preventing these tyrannical lots from imposing restrictions on free development, free use and possession, free distribution and free research of security topics must be carried out on a political basis as well; otherwise, if restrictions are enforced, it will spell doom for the privacy and security of the masses in the digital space.

Matt • March 11, 2015 11:45 PM

@daniel: More paranoid people would say that they only did that so that they would have a plausible explanation for how they got it. Even if they could decrypt a powered down full-disk encryption setup, they wouldn't want that to be publicly known. I suspect they don't have that break, but you have to consider that they might.

Matt • March 11, 2015 11:46 PM

@wael: I can't find what a "class III adversary" is. Could you please link me to a definition?

Wael • March 11, 2015 11:48 PM

@Thoth,

Castles? Prisons? TPMs? Trust Zone? You, Clive Robinson, and Nick P in one post? Are you nuts? I can't bear the excitement! You almost gave me a heart attack! Good thing you forgot @Figureitout and a couple of cups of tea, otherwise I would have been a goner by now!

@Nick P, @Clive Robinson,

@Nick P: You hate the C-v-P "so called analogy, which it's not" because your model is not a castle, but a combination. Are you good to continue the discussion? Let's talk about trust boundaries and what happens when they are crossed in your "not-so-pure-castle" model.

Aaron Spink • March 11, 2015 11:58 PM

@flop_flop

Having worked with/in the group that eventually did the majority of the work for TCI/Secure Boot, I can say your characterization is completely flawed.

Its purpose has never been to lock the end user out of the hardware. Its purpose has been to lock *down* the software so that you are running a known configuration. It's a basic requirement for any secure environment to know what you are actually running.

Yes, that can be used for evil, but it can also be used for good. It is a specification; it has no intent.

Wael • March 12, 2015 12:26 AM

@Matt,

Here is a blog reference and a related external one.
I can't find the exact link at the moment, but the classification went something like this:

Class I: Script kiddies; amateurs with expertise in a few related domains, who may be able to exploit some weaknesses once they are published.

Class II: Knowledgeable insiders who can not only exploit weaknesses, but can also induce some weaknesses. They have access to expensive specialized equipment and to needed domain knowledge, and typically work in teams. Examples: some black hats.

Class III: Major governments or major research organizations with lab environments, virtually unlimited funds and highly skilled domain experts. They can also utilize teams of Class II attackers. The law is also on their side. They can get the information they need by any means possible.

I think this was an IBM classification. Too sleepy to find it, and I think I may have botched the description a little.

Wael • March 12, 2015 12:52 AM

@Matt,

Two things:
1) I botched it a little
2) Three levels may have been OK in the past, but are no longer adequate

I'll update it when I am more coherent. "By any means possible" should have been "By any means necessary".

Wael • March 12, 2015 12:59 AM

@Aaron Spink,

Yes, that can be used for evil but can also be used for good. It is a specification, it has no intent

The Alfred Nobel phenomenon, so to speak!

Thoth • March 12, 2015 2:19 AM

@Wael
Do you have information on what the normal security encapsulations, like tamper traps and sensors, are on a TPM chip?

Wael • March 12, 2015 7:26 AM

@Thoth,

Do you have information on what the normal security encapsulations, like tamper traps and sensors, are on a TPM chip?

I don't know if there is a "norm". Some have over 50 protection measures, such as noise injection, active shielding, checks and protections against out-of-nominal temperature and voltage, sensors that reset the TPM if chemical etching is detected, etc... They don't make all the methods publicly known, but you can still search! You can compare it to smart card protection...
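Countermeasures like the noise injection Wael mentions exist because unprotected implementations perform data-dependent operations with distinguishable power draw. Here is a toy sketch of the idea behind simple power analysis against square-and-multiply exponentiation; the recorded operation list stands in for a real power trace, so this is an illustration of the principle, not an attack on any actual TPM:

```python
# Toy simple power analysis (SPA): on unprotected hardware, squarings
# and multiplies draw distinguishably different power. Here we just
# record which operation ran, as a stand-in for a measured trace.

def modexp_leaky(base, exponent, modulus, trace):
    """Left-to-right square-and-multiply that logs each operation."""
    result = 1
    for bit in bin(exponent)[2:]:        # MSB first
        result = (result * result) % modulus
        trace.append('S')                # a squaring happens for every bit
        if bit == '1':
            result = (result * base) % modulus
            trace.append('M')            # a multiply happens only for 1-bits

    return result

def recover_exponent(trace):
    """Read the secret exponent straight off the 'power trace'."""
    bits = []
    i = 0
    while i < len(trace):
        if i + 1 < len(trace) and trace[i + 1] == 'M':
            bits.append('1')             # S followed by M: the bit was 1
            i += 2
        else:
            bits.append('0')             # lone S: the bit was 0
            i += 1
    return int(''.join(bits), 2)

if __name__ == '__main__':
    secret = 0b101101
    trace = []
    modexp_leaky(7, secret, 1009, trace)
    assert recover_exponent(trace) == secret
```

Masking exactly this square-versus-multiply distinction (constant-time ladders, dummy operations, noise) is what the protections above are for.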

Brian • March 12, 2015 10:12 AM

I was just wondering: what about a BitLocker-encrypted Flash drive? Could this method of targeted attack exploit a similar vulnerability in a flash drive that has been stolen (or found on the sidewalk)?

Peter Tutor • March 12, 2015 11:12 AM

"But who knows? We do know that the FBI pressured Microsoft into adding a backdoor in BitLocker in 2005. I believe that was unsuccessful."

I guess after 10 years they could find some way...

Alex • March 12, 2015 1:10 PM

Microsoft pushes Bitlocker... given past history, I'm reasonably sure there is some sort of gov't backdoor in it.

Nick P • March 12, 2015 7:39 PM

@ Wael

I thought you wanted a framework for everything lol. Anyway, we can, but in the near future. My job's drained my mind and energy quite a bit for the past few days.

Ann O • March 12, 2015 10:25 PM

Daniel: "the single biggest aspect of the whole Dread Pirate Roberts/Silk Road situation was the concentrated efforts the FBI took to make sure they seized his laptop while it was still running. Methinks that if the FBI had an easy way to decrypt hard drives they would not have made sure they seized the laptop before they seized him"

Your conclusion is flawed. If they can break disk crypto offline, they wouldn't want to reveal that capability in public court documents. Seizing the computer while it's running is a good way to make the evidence from the disk admissible in court without needing to do parallel construction.

Wael • March 12, 2015 11:14 PM

@Nick P,

I thought you wanted a framework for everything lol.

We all have our weaknesses... Don't work too hard!

Wael • March 13, 2015 12:34 AM

@Matt,

Here is a better link

I think this should be more granular:

Class 0: Script kiddie

  • Possesses basic skills
  • Can find and run scripts
  • Can modify scripts for a different purpose

Class I: Clever outsider

  • Can find weaknesses
  • Expert in a needed subject matter
  • Can develop the scripts that Class 0 group members use
  • Has access to some basic tools: compilers, SW debuggers, sniffers, etc...

Class II: Knowledgeable insider

  • A subject matter expert in more than one required domain: cryptography, operating systems, hardware, etc...
  • Has access to sophisticated equipment: logic analyzers, spectrum analyzers, test equipment, hardware debuggers, ...
  • Has access to internal knowledge that's not available to an outsider
  • Has access to subject matter experts in needed areas
  • Can induce weaknesses: fault injections, out-of-nominal-spec operation
  • May work as part of a team

Class III: Funded organization

  • Has access to the most sophisticated tools
  • Has access to teams of subject matter experts in all needed domains
  • Able to allocate the funds and resources needed to accomplish a task
  • Can recruit or coerce Class II attackers
  • If it's a state actor, technology is not the only weapon at its disposal

Z.Lozinski • March 13, 2015 12:02 PM

@Matt, @Wael,

The definitions of Class I, Class II and Class III attackers originated in the security analysis of the High Performance Shield (an HSM, in today's parlance). HPS was developed as part of IBM's Common Cryptographic Architecture for the financial industry in the late 1980s to secure ATM systems, and was the secure environment for all crypto services. For examples of HPS, think of the IBM 4755 Crypto Adapter, the IBM 4758 Crypto Coprocessor and their friends.

One of the key design decisions was which classes of attacker the system had to be secure against.

At the time (1989) the definitions were:

"Class I (clever outsiders) - They are often very intelligent but may have insufficient knowledge of the system. They may have access to only moderately sophisticated equipment. They often try to take advantage of an existing weakness in the system, rather than try to create one.

Class II (knowledgeable insiders) - They have substantial specialized technical education and experience. They have varying degrees of understanding of parts of the system but potential access to most of it. They often have access to highly sophisticated tools and instruments for analysis.

Class III (funded organizations) - They are able to assemble teams of specialists with related and complementary skills backed by great funding resources. They are capable of in-depth analysis of the system, designing sophisticated attacks, and using the most sophisticated tools. They may use Class II adversaries as part of the attack team."

Class 0 (Script Kiddies) didn't exist at the time (though the 1987 CHRISTMA EXEC had started to show what was possible a couple of years previously).

Equally interesting for Bruce's readers, there is a section on a 5-phase design methodology for highly secure systems.

i) understand the environment where the system will be used, and detail what needs to be protected
ii) consider known protection methods and attacks, define attack scenarios, identify what needs to be protected, identify potential weak points
iii) tentative design, prototype, characterise effectiveness
iv) develop reliable manufacturable hardware from physical prototypes
v) evaluation including characterization, analysis and attack testing

And a section on attack methods:

- microcircuit attacks "aimed at the hardware components where sensitive data are stored"
- counterfeiting and hardware simulation
- eavesdropping

It's deja vu all over again!

There is a published paper if you have access to a university library or IEEE Xplore.

Abraham, D.G., Dolan, G.M., Double, G.P., and Stevens, J.V. (1991). Transaction security system. IBM Systems Journal, 30(2):206–228.

Wael • March 13, 2015 10:10 PM

@Z.Lozinski,

Thanks! That's the reference I was looking for! It was good back then, but things have changed a little since.

Matt • March 15, 2015 6:12 AM

@Wael, @Z.Lozinski: Thanks for the info. They make sense; I didn't know they had been classified, though.

Against Class III adversaries, my strategy is to give up. They will win and life is too short.

Wael • March 15, 2015 12:21 PM

@Matt, @Z.Lozinski,

Wise choice! You can't bring a small knife into a gun (and artillery) fight and expect to win.

DrPizza • March 19, 2015 11:26 AM

The official reason for removing Elephant is that hardware doesn't support it. By using (IIRC) straight AES-128, BitLocker can defer to the crypto hardware in various encrypted hard disks. Windows could already use straight AES-128 in some scenarios, I think, such as when FIPS-compliant mode was enabled.

While I feel that this would arguably justify a change of default, it doesn't really justify the removal of the option. Especially as Windows still understands Elephant, since it can still read and write Elephant disks. It's just no longer capable of using it on a newly-encrypted volume.

ACD • March 27, 2015 8:05 PM

The article mostly talks about attacking the TPM. What about computers without one, that use a complex password required at boot to secure the encrypted files?

Jeff • March 28, 2015 12:39 PM

@ACD

I agree - that is a great point. And what about BitLocker-encrypted flash drives? How can these methods obtain encryption keys for such devices?

T • April 12, 2015 10:17 PM

The notion of "deferring to the crypto hardware on hard disks" doesn't make sense: they all use XTS mode, not CBC.

waynebaal • May 9, 2015 7:54 AM

Sorry, but I just cannot take seriously the opinions of people who use the terms M$ and Crapple!

rattrap22 • June 12, 2015 11:59 AM

It's Microsoft; they built in back doors for the NSA in every product they make. The French switched from Windows to Linux because "we are tired of the American government knowing our secrets before we do!" Get a grip: we are locked up tighter than Hitler could have ever dreamed. Read the Telecommunications Act of 1996.

note dwardsnow den • March 23, 2016 3:40 PM

dear people,

watch https://www.youtube.com/watch?v=L6Hip_eX72c
listen carefully from timestamp 09:50
and ask yourself:
"am I the first person that came up with this encryption algorithm?"
"could any other person have come up with this encryption before I did?"
"could they have kept it a secret?"
"could they now keep it a secret to prevent us from knowing this encryption is already cracked?"

don't hate me cause I laugh you the fuck out though. don't do evil!!!

Jack • November 17, 2016 4:43 PM

Forget about MS BitLocker: it *DOES* have a backdoor that allows gov't and LE to bypass security and decrypt your files. I don't recall the specifics, but I have seen the PowerPoint presentation MS created for gov't and LE organizations describing step by step how to access BitLocker-protected data (this was for 98/XP systems; can't say if the backdoor still exists in 7/8/10). If security is that important to you, dump MS and use a LUKS-encrypted Linux system with the LUKS header stored on a separate device like a MicroSD card, or something easy to lose, hide or destroy. Without the LUKS header, it would likely take the NSA tens or even hundreds of years of cryptanalysis to make any heads or tails of your encrypted data.
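The detached-header setup described here can be done with cryptsetup's `--header` option. A minimal sketch, assuming the MicroSD card is mounted at `/mnt/sd` and the partition to encrypt is `/dev/sdb1` (both hypothetical names; substitute your own devices, and note luksFormat destroys any data on the partition):

```shell
# Pre-allocate a file on the MicroSD card to hold the LUKS header
# (LUKS2 headers default to 16 MiB; 32 MiB leaves headroom).
truncate -s 32M /mnt/sd/header.img

# Format the partition with its header stored ONLY in the detached file.
# The partition itself then carries no LUKS magic at all.
cryptsetup luksFormat --header /mnt/sd/header.img /dev/sdb1

# Opening the volume later requires the header file to be present.
cryptsetup open --header /mnt/sd/header.img /dev/sdb1 secretvol
```

Without the header file there is nothing on the disk identifying it as LUKS, which is the property being described; lose the file (and any backups of it) and the data is gone with it.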

Clive Robinson • November 17, 2016 5:40 PM

@ Jack,

use a LUKS-encrypted Linux system with the LUKS header stored on a separate device like a MicroSD card, or something easy to lose, hide or destroy.

Just don't use Ubuntu, unless you've really, really made sure you've backed it up...

To quote the FAQ,

    In particular the Ubuntu installer seems to be quite willing to kill LUKS containers in several different ways. Those responsible at Ubuntu seem not to care very much (it is very easy to recognize a LUKS container), so treat the process of installing Ubuntu as a severe hazard to any LUKS container you may have.

It's not just installing you need to take care but sometimes upgrading as well...

But then Ubuntu has done an "ET" and phoned home with "Telemetry" etc., so best to just bin it and its derivatives.
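The header backup this warning calls for can be made with cryptsetup itself before letting any installer near the disk. A sketch, assuming `/dev/sda3` (a hypothetical device name) is the LUKS container and a USB stick is mounted at `/media/usb`:

```shell
# Save a copy of the LUKS header and key slots to a file kept OFF
# the disk that is about to be touched by the installer.
cryptsetup luksHeaderBackup /dev/sda3 \
    --header-backup-file /media/usb/sda3-luks-header.img

# If an installer later clobbers the header, put it back with:
# cryptsetup luksHeaderRestore /dev/sda3 \
#     --header-backup-file /media/usb/sda3-luks-header.img
```

Note that the backup contains the key slots, so anyone who obtains it can brute-force the passphrase offline; guard the backup file as carefully as the disk itself.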

CS • August 23, 2017 5:35 AM

@Jack:

"describing step by step how to access bitlocker protected data (this was for 98/XP systems, can't say if the backdoor still exists in 7/8/10)."

98/XP does not have and never had BitLocker. BitLocker started with Vista.


Photo of Bruce Schneier by Per Ervland.

Schneier on Security is a personal website. Opinions expressed are not necessarily those of IBM Resilient.