FIPS 140-2 Level 2 Certified USB Memory Stick Cracked

Kind of a dumb mistake:

The USB drives in question encrypt the stored data via the practically uncrackable AES 256-bit hardware encryption system. Therefore, the main point of attack for accessing the plain text data stored on the drive is the password entry mechanism. When analysing the relevant Windows program, the SySS security experts found a rather blatant flaw that has quite obviously slipped through testers’ nets. During a successful authorisation procedure the program will, irrespective of the password, always send the same character string to the drive after performing various crypto operations—and this is the case for all USB Flash drives of this type.

Cracking the drives is therefore quite simple. The SySS experts wrote a small tool for the active password entry program’s RAM which always made sure that the appropriate string was sent to the drive, irrespective of the password entered and as a result gained immediate access to all the data on the drive. The vulnerable devices include the Kingston DataTraveler BlackBox, the SanDisk Cruzer Enterprise FIPS Edition and the Verbatim Corporate Secure FIPS Edition.

Nice piece of analysis work.

The article goes on to question the value of the FIPS certification:

The real question, however, remains unanswered: how could USB Flash drives that exhibit such a serious security hole be given one of the highest certificates for crypto devices? Even more importantly, perhaps: what is the value of a certification that fails to detect such holes?

The problem is that no one really understands what a FIPS 140-2 certification means. Instead, they think something like: “This crypto thingy is certified, so it must be secure.” In fact, FIPS 140-2 Level 2 certification only means that certain good algorithms are used, and that there is some level of tamper resistance and tamper evidence. Marketing departments of security vendors take advantage of this confusion—it’s not only FIPS 140, it’s all the security standards—and encourage their customers to equate conformance to the standard with security.

So when that equivalence is demonstrated to be false, people are surprised.

Posted on January 8, 2010 at 7:24 AM

Comments

Ricky Bobby January 8, 2010 7:28 AM

There are multiple ways encryption has been broken in the past, including laptop encryption via cold-boot attacks that cool the RAM.

However, I still believe these encryption techniques are valid 95% of the time, because not every company faces the risk of someone with this kind of talent cracking their software. How many people in the world could implement this crack? I would not buy it now with other choices available, but it still works for most situations.

xxx January 8, 2010 7:39 AM

@Ricky Bobby:

Two fallacies:

  1. It only takes one person to crack it and make a tool available. The rest of us can just, you know, download the tool and use it, without understanding the details.

  2. Unlike the vulnerabilities that you mention, this is a rather blatant bug: the password IS NOT USED for encryption.
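
To make the flaw concrete, here is a minimal sketch of the broken unlock flow as the article describes it (all names and values here are hypothetical stand-ins, not the vendors’ actual code):

    import hashlib

    class Drive:
        def send(self, unlock_string: bytes) -> None:
            print("drive unlocked by:", unlock_string.hex())

    STORED_HASH = hashlib.sha256(b"user-password").digest()  # hypothetical
    UNLOCK_STRING = bytes(16)  # stand-in for the constant string SySS observed

    def unlock(drive: Drive, password: bytes) -> bool:
        # The password is checked only on the PC; the drive never sees it.
        if hashlib.sha256(password).digest() != STORED_HASH:
            return False
        drive.send(UNLOCK_STRING)  # same constant for every password and drive
        return True

    # An attacker's tool can skip the check entirely:
    Drive().send(UNLOCK_STRING)  # essentially what the SySS RAM patch achieved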

HJohn January 8, 2010 7:49 AM

And as usual, it wasn’t the encryption itself that was cracked, it was weaknesses in the implementation that were exploited.

John January 8, 2010 8:04 AM

My main question is: how did the same flaw appear in the products of 3 different companies? Is it because all three are outsourcing to the same fourth company? Or is it that they’re following an industry standard? Or has someone who was involved in the design been moving from company to company? But yes, it’s a really silly and stupid mistake.

vedaal January 8, 2010 8:05 AM

“The article goes on to question the value of the FIPS certification”

maybe FIPS should re-check their certification/recommendation for asymmetric key lengths:

here is a Lenstra paper demonstrating the factoring of 768-bit RSA, with predictions that 1024-bit RSA should fall relatively soon:

http://eprint.iacr.org/2010/006

D January 8, 2010 8:21 AM

@vedaal:

You missed the point. These devices passed certification for their use of an algorithm, but it’s not that algorithm that was broken. What broke (and is not addressed by the certification AT ALL) is their not using the password to generate the key.

E January 8, 2010 8:36 AM

@D
It would not be a wise decision to use the password as part of the encryption key. In that case, whenever the password changes, you’d have to decrypt with the old key and re-encrypt the data on the device with the new key.

Ricky Bobby January 8, 2010 8:38 AM

@ billswift

Agreed

The risk is minimal. Not many security experts or others who have the knowledge are stealing these items. It is generally people trying to make a quick buck. There is a risk, but a risk I can accept if I already had this product in my line of work (which is not government).

Same with the laptop encryption hack: what are the odds the typical thief will steal a laptop, freeze your RAM, and break the encryption?

hnn January 8, 2010 8:45 AM

It’s the same with safety standards. (Such as EN 61508, which I’ve had the misfortune to work with.) Nobody understands them. Few actually follow them even though they claim to (not that it would matter much if they did). Everyone thinks “SIL 3” in the brochure makes everything magically safe. It probably does more harm than good.

Nik January 8, 2010 8:46 AM

The Lexar JumpDrive V1 even stored the password on the drive – XOR’d with a fixed string.

Normal users just can’t evaluate how secure something really is – that’s why security analysts, and the publication of their results, are essential.
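
For illustration: XOR with a fixed string is its own inverse, so anyone who extracts the constant from one drive can recover the password on every drive (a toy sketch; the real Lexar constant isn’t public as far as I know):

    FIXED = b"secret!!"  # hypothetical constant baked into every drive

    def obfuscate(pw: bytes) -> bytes:
        pad = (FIXED * (len(pw) // len(FIXED) + 1))[:len(pw)]
        return bytes(a ^ b for a, b in zip(pw, pad))

    stored = obfuscate(b"hunter2")
    assert obfuscate(stored) == b"hunter2"  # applying XOR twice recovers the password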

Dave January 8, 2010 8:53 AM

Please stop saying there is nothing wrong with the encryption, it’s just the implementation that’s wrong. That’s like saying they were perfectly healthy, except their heart stopped beating.

ALL implementations of encryption are problematic.

And those of you who say the risk is low and you’re willing to take it: really, you take the risk because you don’t have a practical choice. People operate unsafe vehicles all the time; they just don’t want to hear about it. Are you saying that if something better came along you wouldn’t begin using it?

Vernon January 8, 2010 8:56 AM

Reading the linked report, this is absolutely terrible. Unlocking every device with the same fixed string rather than the password entered – that’s worse than incompetence; one could argue it’s broken by design.

Moreover, SanDisk’s and Verbatim’s response is insufficient. It’s fine if people rewrite their data using the patched software, but the problem is that already-written data remains vulnerable, and many might not bother to rewrite it. At least Kingston’s response clearly indicates that the system is faulty and is properly fixing it.

Tordr January 8, 2010 8:59 AM

@Rick Bobby

The risk for the ordinary person is minimal. A typical thief will not steal a laptop, freeze the RAM and break the encryption, but an ordinary person does not need a laptop with encryption that is certified to be secure.

The person who needs certified secure laptops and FIPS Level 2 certified USB sticks, on the other hand, is the same person who might be targeted by a thief who will break the encryption. For that person this break is important.

Now why do I care, if I do not need that kind of security for myself? Well, the sensitive information on the certified secure laptop might be, for example, social security numbers, financial data or credit card numbers, some of which are connected to me.

Patrick G. January 8, 2010 8:59 AM

@RickyBobby

Your USB Stick (one of the 3 brands) is stolen/lost and anyone with a bit of knowledge and motivation can get your data off the stick.
Without your PC, without your password, without breaking any encryption at all.

That’s 120 Dollars extra for less security than any compression tool with “password protection” has to offer.

Minimal indeed…

HJohn January 8, 2010 9:06 AM

@Dave: “Please stop saying there is nothing wrong with the encryption, it’s just the implementation that’s wrong. That’s like saying they were perfectly healthy, except their heart stopped beating. ”


Your analogy is good, but your conclusion is wrong. The analogy is good because one flawed vital organ will kill the whole organism. The conclusion is wrong because that doesn’t mean there is anything wrong with the other organs.

The encryption wasn’t cracked, the implementation was exploited. That’s the fact. Keep the same encryption, fix the implementation.

Taken to its conclusion, your otherwise good analogy would basically say we could never donate organs; after all, we couldn’t say there was nothing wrong with the kidney just because the person died when their heart stopped beating.

Ricky Bobby January 8, 2010 9:17 AM

@ Patrick G

It is awful, don’t get me wrong, but the risk of this hack actually being used is very minimal in most areas.

It’s the typical “movie plot” that everyone here loves to cite: saying that everyone who steals laptops or USB sticks has the capability/knowledge to hack encryption to get the data. It just doesn’t occur 99% of the time. At the government level it is a risk that should not exist, but elsewhere it’s minimal.

Ricky Bobby January 8, 2010 9:22 AM

At the same time, if I just bought these and found this out I would be extremely unhappy too.

Bryan January 8, 2010 9:34 AM

@HJohn

What’s your point? If the data is compromised it doesn’t matter how the attacker got it.

The only difference I can see is:

If the implementation is broken, only devices implemented that way are vulnerable.

If the encryption technology or algorithm is broken, only devices using that technology or algorithm are vulnerable.

The latter being a potentially larger group.

Since the headline of Bruce’s post was specific to the device, I don’t understand the significance of your comment.

John January 8, 2010 9:39 AM

@E … It would not be a wise decision to use the password as part of the encryption key.

Actually, the wise decision would be to have a random master key to encrypt the data on the drive and to store the master key on the drive after encryption by a key generated from the password. Effectively, you use the password to obtain the key used to encrypt the master key.

So changing the password is quick and simple. Yet, without the password, you can’t get the master encryption key.

The actual mention of encryption in the article is a red herring. What they effectively had was a system where a program running on the computer simply told the USB drive, “Yep. This person knows the password, so it’s OK for you to unlock yourself”. Any mention of encryption after that point is meaningless.
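
A minimal sketch of this wrapped-master-key scheme, assuming PBKDF2 and AES-GCM as the primitives (illustrative choices, not any vendor’s actual design):

    import os
    import hashlib
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def wrap_master_key(password: bytes):
        master_key = os.urandom(32)          # this key encrypts the data
        salt = os.urandom(16)
        kek = hashlib.pbkdf2_hmac("sha256", password, salt, 200_000)
        nonce = os.urandom(12)
        wrapped = AESGCM(kek).encrypt(nonce, master_key, None)
        return master_key, (salt, nonce, wrapped)   # persist only the tuple

    def unwrap_master_key(password: bytes, salt, nonce, wrapped):
        kek = hashlib.pbkdf2_hmac("sha256", password, salt, 200_000)
        return AESGCM(kek).decrypt(nonce, wrapped, None)  # fails on a wrong password

Changing the password then just means unwrapping the master key with the old password and re-wrapping it with the new one; the bulk data is never re-encrypted.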

Jack January 8, 2010 9:40 AM

@ vernon

Think of this: if Kingston is recalling the drives, you might have a situation where the data on the drives being sent back is exposed… better to apply a patch and fix it. This patch fixes the bug. This stuff happens all the time in this business. No need to get all worked up. As long as the vendors react in time (which they did), it’s perfectly acceptable to me.

HJohn January 8, 2010 9:41 AM

@Bryan: What’s your point? If the data is compromised it doesn’t matter how the attacker got it. Since the headline of Bruce’s post was specific to the device, I don’t understand the significance of your comment.


I agree to a point (it doesn’t matter how he got it, it is vulnerable).

The point of my comment is that in terms of defense and resolution, understanding what really happened is essential. Without that understanding, someone may mistakenly think that the encryption itself is flawed and in need of correction, which is not the case. It is how it was implemented.

I’m sorry you can’t see the obvious significance of that.

HJohn January 8, 2010 9:51 AM

@Bryan at January 8, 2010 9:34 AM

I really should leave it at my last comment, but your logic really got under my skin. (I think you’ve confronted me under another name before, to be honest)

There is nothing off base about discussing the details of what happened and why. It would be a pretty useless blog if people could only post generally about the exact headline.

John January 8, 2010 9:54 AM

Complex security systems occasionally have unexpected and embarrassing failures, regardless of how well they were designed and tested. This is why ongoing peer review is such an important part of today’s security matrix.

It is also why we read these posts – to learn (hopefully) from the mistakes of others and try to improve and/or fix our own products, services, procedures and even user education.

After all, to err is human. All three vendors have acted quickly and proactively to communicate their security bulletins and issue corresponding fixes (links in the original article). These updates are thankfully rare. Kudos to them.

I’m more worried about those vendors that never communicate security issues, never update their firmware or software, and never admit to making a mistake.

Dave January 8, 2010 10:03 AM

@HJohn – It is commonly said that one-time pads represent perfect security. The problem is they are very impractical, and there is no way around it. Where does the one-time pad come from, and how does the far-end party get YOUR one-time pad to XOR against the ciphertext? Where do you store the one-time pad, since it is equally sensitive? And it’s as big as the original plaintext! And don’t ever reuse one the way the Soviets did after WWII and got busted.

The point is, there is no way around the impracticalities. You can repeat it a thousand times that a truly random source XOR’d with plaintext cannot be broken. So what? You can’t escape the impracticalities of it.

From your posts, I suspect you are just arguing for the sake of arguing. But I am willing to expound for other reasons. And I am in good company when I make this point.

There are inescapable impracticalities when using encryption. Yes, there are better implementations and there are worse implementations – I guess this is your only point, really. So what? You cannot escape the problems of key management, key generation, and key storage; and how do you share files encrypted with one key on your machine with someone distant? Do you use layers of encryption with multiple keys? Ouch. And if the key is based on your password, then it is no better than your password. No matter what the algorithm is, you are essentially exchanging the sensitivity of your plaintext for the sensitivity of the key. And there are many other serious problems with encryption implementations.

Don’t take it personally.

President Scroob January 8, 2010 10:09 AM

… and the string was ‘12345’, the same combination I have on my luggage.

Also, John is right, the correct way to handle this stuff is:

password verified against a hash.
password used to decrypt key.
key used to decrypt data.

Since the password is entered in software and the key handled in hardware, I would also suggest that the log-in program send the password to the drive encrypted with either a shared secret (more hackable) or public key (more secure), not in plain text.
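
Going a step further, a challenge-response unlock avoids sending anything replayable at all. A rough sketch along those lines (a hypothetical protocol, not what any of these vendors ship):

    import os, hmac, hashlib

    def derive_key(password: bytes, salt: bytes) -> bytes:
        return hashlib.pbkdf2_hmac("sha256", password, salt, 100_000)

    # At provisioning time the drive stores derive_key(password, salt).
    def drive_challenge() -> bytes:
        return os.urandom(16)                      # fresh nonce per attempt

    def host_response(password: bytes, salt: bytes, challenge: bytes) -> bytes:
        key = derive_key(password, salt)           # re-derived from the typed password
        return hmac.new(key, challenge, hashlib.sha256).digest()

    def drive_verify(stored_key: bytes, challenge: bytes, response: bytes) -> bool:
        expected = hmac.new(stored_key, challenge, hashlib.sha256).digest()
        return hmac.compare_digest(expected, response)

Because the challenge changes on every attempt, a sniffed or replayed host message is useless, and there is no fixed unlock string for a tool to patch into RAM.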

jgreco January 8, 2010 10:17 AM

@John

“Complex security systems occasionally have unexpected and embarrassing failures, regardless of how well they were designed and tested. This is why ongoing peer review is such an important part of today’s security matrix.”

The issue here is the complete and absolute lack of any meaningful testing by the vendors. This design was so braindead that I’d like to chalk it up to malice, because it pains me to think someone could ever be this dumb.

I must say, I am hesitant to hand out kudos to those who fail so dramatically the first time around, then admit their error only after someone else points it out. Seems like that is setting the bar a tad low.

HJohn January 8, 2010 10:18 AM

@Dave: From your posts, I suspect you are just arguing for the sake of arguing. But I am willing to expound for other reasons. And I am in good company when I make this point.


That’s certainly not the case, and I apologize if I sound that way.

I did, in fact, like your analogy. I just disagreed with the conclusion, no malice or offense was intended. It really is my point that one could misunderstand why this happened and, as such, implement the wrong solution.

jgreco January 8, 2010 10:26 AM

@Dave at January 8, 2010 10:03 AM

Nobody here is arguing that absolute security is practical, or even exists in the real world.

However, suggesting that the encryption used here is somehow at all responsible is patently absurd. Because of the way the device was implemented, encryption wasn’t even being used, for all intents and purposes.

This is a textbook example of an implementation issue, not an encryption issue. You are not doing anybody any good by clouding this fact.

Till January 8, 2010 10:49 AM

One-time password lists work fantastically in certain use cases. Every bank in Germany uses TANs; to do online transactions, they send you a list of 6-digit numbers in a secure envelope via snail mail. For each transaction, you must enter one of these numbers. When you’re running low, they send you another.

It’s a bit of a pain in the ass, but it also means anyone who manages to steal your password can only look at your bank account, not touch anything. It’s a wonderfully simple, low-tech, effective use of one-time passwords.

Obviously not every encryption scheme can be wedged into every situation. But that hardly needs to be stated.
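
The mechanism is about as simple as security gets; a toy sketch of single-use TAN checking (my illustration, not any bank’s actual code):

    issued_tans = {"493021", "175442", "908113"}  # mailed in a secure envelope

    def authorize(tan: str) -> bool:
        if tan in issued_tans:
            issued_tans.discard(tan)  # each TAN works exactly once
            return True
        return False

    assert authorize("493021") is True
    assert authorize("493021") is False  # a replayed TAN is rejected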

Bob January 8, 2010 11:01 AM

FIPS is a joke that means nothing about real-world security.

Last time I looked, the highest-level FIPS standard for operating systems assumed that the system being tested was run on an isolated network, where the only people around were white hats with normal user privileges. So what they were really testing was: “If everybody who has access to the keyboard or the network has good intentions, and doesn’t have an admin password, how hard is it to accidentally screw the system up?”

It’s a telling point that ancient, notoriously malware-prone versions of Windows are FIPS-certified, while demonstrably more secure versions of Mac OS X, Linux, and *BSD aren’t – mostly because Microsoft was willing to pay a lab to slap its FIPS rubber stamp down, while Apple etc. weren’t.

So that’s why our government and business world runs its security-critical infrastructure on the most insecure operating systems on the market. When they get hacked, anybody called to account can bleat, “Oh, but it was FIPS certified!”

Of course, there’s been discussion of higher FIPS levels that actually mean something realistic, but the last I checked, they hadn’t gone anywhere. I’m not sure whether the biggest contributing factor is that the people charged with defining test protocols can’t figure out how to do anything beyond the trivial, or that vendors object strenuously to any raising of the bar.

HJohn January 8, 2010 11:11 AM

@Bob at January 8, 2010 11:01 AM

I’ve long been of the opinion that the government would be better off if it:
1) got out of the business of legislating/promoting standards and methodologies; and
2) instead legislated the penalties for certain types of disclosures.

Make it of concern to the business through the penalties/sanctions/laws/etc. and let people that really know what they are doing find the solutions.

Sort of like speed limits. They haven’t passed laws mandating that everyone use cruise control, or have warning alarms when exceeding a certain speed, or even that we have an accurate speedometer. They just give us tickets if we are in excessive violation of the speed limit and let us figure out how to comply with the law.

I know it isn’t as simple as speed limits, but the point is still correct, IMHO.

Nobody January 8, 2010 11:12 AM

@bob
The FIPS (Orange Book C2) certification that NT famously passed was even worse than that. It doesn’t require that an intrusion be prevented, just that certain attempts be logged.
One famous OS that passed C2 didn’t even have a way to extract the logs – apparently C2 doesn’t require that the logs can be read, only that they are created.

HJohn January 8, 2010 11:17 AM

@Nobody: “apparently C2 doesn’t require that the logs can be read, only that they are created.”


I know an auditor who was once tasked to verify that an entity routed all their traffic through an IDS, in accordance with state law. He found an IDS box, turned off, with holes drilled in the side of it and the cable passing through… the entity explained that “we couldn’t get it to work, but the law just says it has to pass through the IDS – it doesn’t say it has to be turned on.”

We got a good laugh, but those responsible were clearly inept.

Dave January 8, 2010 11:27 AM

I think it’s a mistake to assume “OK, so someone found a vulnerability here, good work. But they’ll fix it and then we’ll be OK, right?” We will never be “OK”. Hardware and operating systems, etc. are always changing. And every time someone moves the furniture it introduces new vulnerabilities.

Imagine a company designed a car where, if you hit the wrong button at the wrong time for whatever reason, the steering would lock up on the next turn. Then imagine every time someone got killed the manufacturer responded “Oh, another idiot. Well, it’s not our fault. We warned them not to even brush up against that button at the wrong time. It’s their fault.”

People will always say “it’s just a bad implementation”. But the difficulty in NOT implementing good security is inherent in the way encryption works. I’m not clouding anything. There are many glaring problems with encryption no one even points out, because that is all anyone knows. It’s often overlooked that once you encrypt a file, that ciphertext is tied to the key used, forever. You might lose the ciphertext and say to yourself “it’s OK, it’s encrypted.” But someone can keep your ciphertext as long as they want and even get the key months later from a completely different incident. The algorithm isn’t a secret. If you remember later “oops, I encrypted a WinZip file”, too bad. Those weaknesses are in there and you can’t take them back. You can’t stop someone from decrypting the ciphertext and you may never know if they did or not. And today, whole drives are being encrypted with a single password/key. Yikes!

The security isn’t like ripstop nylon, it’s more like the side of a potato chip bag. Once the tear starts it races down the side of the bag. If someone gets that key, who cares HOW they do it, they get EVERYTHING. And it doesn’t stop there. They can then use the drive to perhaps penetrate an entire network. Do you think you will know about it first, when it happens? You may never know. Many companies have been breached, but don’t even know it. What should we say then – “oh, it’s just another bad implementation?”

People need to work and they need tools to do so. They need to access files and they also need to keep information away from adversaries. That’s what people want and need. They don’t need to worry “uh…wonder where this thing is storing my keys?” They don’t need to worry if their keystrokes are being sniffed, because all the security of all their files and maybe the integrity of their entire business depends on one little bitty string “johnny_ran_home_88”. And they don’t need people telling them everyday “don’t forget to choose a ten mile long password, and don’t forget to change it all every two weeks because if you don’t it’s YOUR FAULT.”

Don’t tell me it’s got nothing to do with encryption.

Dave January 8, 2010 11:36 AM

Watch for this: Someone will lose a laptop, but because it is encrypted with FIPS approved and certified endpoint security software, the company doesn’t report it. Then someone will break into it and the unreported information will be made public.

John January 8, 2010 11:54 AM

Dave,

All encryption schemes are, technically, a one-time pad these days. Basically, an encryption algorithm like AES or RC4 is a pseudo-random number generator with a seed value, except its state is affected by a continuous stream of re-seed values (the plaintext). We generate a stream of random data the size of the entire plaintext, XOR it over the plaintext, and have our output.

This is an oversimplification. However, to answer your question of “how do we transport the one time pad,” our current model is to publish an algorithm that generates the same one time pad given the same input OR correct output and the same key. Some algorithms use methods by which the same input can produce multiple random outputs, but each of those decays back to the correct input (welcome to modular arithmetic); but basically, same idea.
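
Here is the keystream-XOR construction I mean, sketched with AES in CTR mode (an illustration only; the keystream is pseudorandom, not truly random):

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key, nonce = os.urandom(32), os.urandom(16)

    def keystream_xor(data: bytes) -> bytes:
        enc = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
        return enc.update(data) + enc.finalize()

    msg = b"attack at dawn"
    ct = keystream_xor(msg)
    assert keystream_xor(ct) == msg           # XOR with the same keystream twice
    ks = keystream_xor(bytes(len(msg)))       # encrypting zeros yields the keystream
    assert bytes(a ^ b for a, b in zip(msg, ks)) == ct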

jgreco January 8, 2010 11:55 AM

@Dave at January 8, 2010 11:27 AM

What you are saying is generally considered common knowledge around here. Everyone knows that if you have a ciphertext, and the key used to decrypt it, that you can recover the plaintext. All of these points are true, obvious, and I wouldn’t think of disagreeing with them.

What I disagree with is the relevance of all of your points to this article. No weakness in AES caused this incident. The weakest link in this device wasn’t the cipher used, but the “secret doorknock” that it uses to unlock. Quite clearly an implementation issue.

jgreco January 8, 2010 12:00 PM

@Dave

“But the difficulty in NOT implementing good security is inherent in the way encryption works.”

I’m not sure how to parse this sentence in a way that makes sense*, so I ignored it.

*Are you trying to say that it is hard to improperly implement encryption? That doesn’t make any sense at all. I don’t think this is just an accidental double negative, as you emphasized “not”.

Paul January 8, 2010 12:06 PM

“All encryption schemes are, technically, a one-time pad these days. Basically, an encryption algorithm like AES or RC4 is a pseudo-random number generator with a seed value, except its state is affected by a continuous stream of re-seed values (the plaintext). We generate a stream of random data the size of the entire plaintext, XOR it over the plaintext, and have our output.”

XORing a stream of data generated by a stream or block cipher is NOT a one-time pad. It does not fit the randomness requirement any more than using an ordinary pseudo-random number generator instead of the cipher would.

So technically speaking, no, we are not using one-time pads these days. Some people may, but it is by far the minority of cases.

jgreco January 8, 2010 12:08 PM

@John

I fear you may have oversimplified a bit too much. By definition, if you are using an algorithm to recreate your pad, then you are not using an OTP.

OTPs are provably secure while stream ciphers are not, and that is an important distinction to make.

Paul January 8, 2010 12:12 PM

Sorry, the first paragraph was me quoting John. I am too used to blogs that accept limited HTML.

FIPS NIST Mishmash January 8, 2010 1:02 PM

Do you think they built in this backdoor so that they could support customers who call in because they forgot their password? I would always want to know if a vendor has a backdoor into my ostensibly “encrypted” device.

Speaking of which, I’m in the market for an external hard drive. I’m wondering if I should get one with hardware encryption or just use TrueCrypt. It’s for backup purposes because I’m tired of burning my data to multiple DVDs. I haven’t bought one yet because I’m concerned about these kinds of flaws in implementation.

Chris January 8, 2010 1:25 PM

Not cracked, rather Hijacked… Crappy article that was misleading…

Interesting to note that IronKey doesn’t have this problem….

John January 8, 2010 1:39 PM

@ Dave

There have been many instances of car vendors recalling cars that have proven defective or had safety issues. A large Japanese car manufacturer recently did just this across the whole US market, IIRC, and earned considerable customer goodwill as a result.

A few decades ago, the Ford Pinto safety issue was treated in a very different way, resulting in a massive customer backlash. And rightly so.

In other words: do it properly, treat it seriously and the vendor can actually gain goodwill. Do it badly (or not at all) and heaven help them.

@Chris

“Interesting to note that IronKey doesn’t have this problem….”

… yet.
… as far as we know.
… says who?

Paranoid? Moi?

HJohn January 8, 2010 1:40 PM

@FIPS NIST Mishmash: “Do you think they built in this backdoor so that they could support customers who call in because they forgot their password? I would always want to know if a vendor has a backdoor into my ostensibly “encrypted” device.”


It’s possible, but I find it unlikely for two reasons:
1) it would be a deliberate weakness opening them up not only to public embarrassment and scrutiny, but also to potential legal liability. If they state that data can “only” be retrieved with the password yet knew this wasn’t true, they could be subject to a whole host of other accusations.
2) if I were them, I’d much rather say “hey, if you lost your password, we can’t help you. Our product is so secure that not even we can see your data once you’ve protected it.” Makes for a shorter service incident than saving customers from themselves.

But, I’ll admit it is possible.

kg January 8, 2010 1:54 PM

Seems silly in the first place. The key incremental requirement of FIPS 140-2 is tamper-evidence. For a device whose primary threat is physical loss/theft, obtaining the certification strikes me as abusive marketing. That it suffered from fatal implementation flaws is hardly surprising.

This seems akin to selling a 2-hour safe with a shoulder-strap. Even if it’s really 2-hour certified, the design obviates the apparent goal (barring a limited class of users for whom the appropriate applications are well understood).

JimmyD January 8, 2010 2:11 PM

@Ricky Bobby
“It is generally people trying to make a quick buck.” “Saying that everyone that steals laptops or usb sticks has the capability/knowledge to hack encryption to get the data. It just doesn’t occur 99% of the time.”

Do you have a source for that “99% of the time” statistic?

I hear time and again that no one is stealing devices for the data, but merely to sell the hardware. While that may be true in some cases, I doubt there is really any way to truly know.

Once unencrypted data (or vulnerable encrypted data as in this case) is out of your control, you have to conclude that it has been compromised. To do otherwise would simply be foolish.

kangaroo January 8, 2010 2:31 PM

@Rick Bobby: “Saying that everyone that steals laptops or usb sticks has the capability/knowledge to hack encryption to get the data. It just doesn’t occur 99% of the time.”

This is not a “hack” of the encryption! If this “encryption” is sufficient for your uses (aka, you don’t expect your attacker to bother to download some freeware to pull your data), then why are you encrypting at all?

Just using an unusual filesystem would be cheaper, faster and sufficient. Download software to use ext3 on your Doze PCs and use a regular stick. Or split it into two partitions, copy partition a to b while inverting the bits, and wipe a — it would be a teeny little program. Or give the partition a disk label, and put the partition inside that partition so you have to know a tiny bit to mount the internal partition. Or rename all your files on a 1:1 mapping that is obscure — a teeny fast program.

You don’t want encryption at all — you’re paying both monetarily and computationally for just obscuring the system a teeny, tiny bit. In ten minutes you should be able to come up with 20 better solutions than encryption.

Another example of process over thought. This is where security fails — people don’t know what the hell they’re talking about. I bet “Rick” is responsible for all his school’s PCs — or is even head of IT for a school district!

Breach January 8, 2010 2:32 PM

@Dave
“Watch for this: Someone will lose a laptop, but because it is encrypted with FIPS approved and certified endpoint security software, the company doesn’t report it. Then someone will break into it and the unreported information will be made public.”

Worse yet, with the breach reporting requirements only stating that the laptop/data must be encrypted, the user’s password can be inscribed on the laptop (rendering the encryption meaningless), yet the company will not have to report the breach since the laptop/data was encrypted.

MarketingSpeak January 8, 2010 3:10 PM

@Bruce
“The problem is that no one really understands what a FIPS 140-2 certification means.” “Marketing departments of security take advantage of this confusion…”

I have to expect that many people are misled by this. There is a lot of terminology used by NIST for FIPS (i.e. validation, certification, compliance). FIPS validation (which results in certification) is for cryptographic modules, not consumer-level products. All one needs to do is programmatically call into a FIPS validated/certified crypto module (i.e. RSA BSafe or MS CryptoAPI) to claim FIPS compliance. This tells nothing of the actual security of the application, process, etc. using the FIPS-validated crypto module (as we found out with these USB flash vendors).

From the FIPS 140-2 Validation Certificate:
“Products which use the above identified cryptographic module may be labeled as complying with the requirements of FIPS 140-2 so long as the product, throughout its life cycle, continues to use the validated version of the cryptographic module as specified in this certificate. The validation report contains additional details concerning test results. No reliability test has been performed and no warranty of the products by both agencies is either expressed or implied.”
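
To make this concrete: an application can call nothing but validated primitives and still be hopelessly insecure. A deliberately bad toy example (hashlib standing in for a validated module):

    import hashlib  # pretend this is a FIPS-validated crypto module

    def protect(data: bytes, password: bytes) -> bytes:
        key = hashlib.sha256(password).digest()         # validated primitive...
        pad = (key * (len(data) // len(key) + 1))[:len(data)]
        return bytes(d ^ k for d, k in zip(data, pad))  # ...inside a repeating-key
                                                        # XOR scheme anyone can break

The hash is validated; the construction around it is junk, and a FIPS compliance claim says nothing about the difference.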

Veracitor January 8, 2010 4:07 PM

I don’t trust Ironkey. Read their docs (public on their website)… their USB drives will load any firmware or key updates signed with Ironkey’s corporate private key, and if you read carefully, they don’t credibly promise not to sign hacks for third parties. In fact, they brag about their close relationships with alphabet agencies. They explain their technical design in enough detail to show that it is not properly secure– the user is given no assurance* that he must provide unique key material from outside to decrypt the data stored inside.

*I don’t mean assurance in an absolute sense; obviously the user would be unable to detect any backdoor Ironkey might choose to hide in the device. I mean assurance in the sense that if the device correctly implemented the spec it would protect the user. The Ironkey spec gives Ironkey a “master secret” good for all customers’ devices, so by design the device is insecure against Ironkey.

Furshlugginer January 8, 2010 4:14 PM

Now we know why SanDisk and the others refused to document the host-to-drive protocol to permit implementation of a Linux driver. As soon as the Linux driver writer found out that all the USB drives used the same unlock string, the game would have been over.

Corollary: ANY device for which you can’t get enough documentation to write a Linux driver is untrustworthy.

Glenn Maynard January 8, 2010 5:15 PM

“Complex security systems occasionally have unexpected and embarrassing failures, regardless of how well they were designed and tested. This is why ongoing peer review is such an important part of today’s security matrix.”

As security systems go, these are very simple. But yes, a major cause of this embarrassment is hardware manufacturers’ insistence on keeping drivers closed-source.

Is there no standard protocol for authenticating to an encrypted USB storage device? If there is, this failure is an order of magnitude worse–there should be standard USB test harnesses to verify that drivers are behaving properly.

If there’s no standard for this, then that’s another major underlying problem. Standardizing this would let users use third-party, verifiable open-source drivers. Short of that (open-source drivers in Windows for any reason are rare), it’d mean that Windows would probably have generic built-in drivers for these devices. That’d be a major step up–not because Microsoft is writing the drivers, but because theirs would be under much heavier scrutiny than individual third-party ones, even without source code.

Wow January 8, 2010 7:09 PM

Amazing how many followers of a security website don’t seem to “get” this flaw – all the USB keys involved use the same access string, pretty much ignoring the password the customer types in. This is incredibly stupid.
It’s like selling 100,000 door locks all with the same key, but bragging about how good the lock’s internal mechanism is.

Seems like a similar number of followers advocate security by obscurity as well.

The point is, these are sold as secure USB keys, and they are not at all secure. You may as well save your money and buy a plain unencrypted USB flash stick.

Whether or not a person needs an actual secure encrypted USB flash drive is another question, but if you pay for security, you should get security.

trsm.mckay January 8, 2010 8:34 PM

On the whole, I think FIPS 140 provides some value, but this latest attack points out its big weakness. I have commented here before on this problem, like when Michael Bond attacked the decimalization tables on the IBM 4758 a few years back.

Before I get into the actual weakness, let’s define some terms to help make it clear just where the problem is (borrowing from Common Criteria): Protection Profile (PP), Evaluation Assurance Level (EAL) and Target of Evaluation (TOE). Probably because FIPS 140 predates CC, it combines PP and EAL together in 4 levels of rating (1 to 4). BTW – a pet peeve of mine is the prominent mention of EAL by marketing folks, but with no details about the PP or TOE. The complaint about Windows NT’s Orange Book rating illustrates the point: the PP and TOE did not include networking, hence it did not have much real-world value. Before leaving this aside, I would be remiss if I did not point out that the latest Windows Vista certification did include networking and many other components.

So my real complaint about FIPS 140 is that the TOE tends to be too small and the PP is not thorough enough for logical security. My favorite illustration is TRSMs (tamper respondent/resistant security modules) intended for host-based PKCS #11 implementations. FIPS 140 is pretty good at making sure you have implemented PKCS #11 correctly (correct algorithms, good RNG, etc.). But what it misses is that straight PKCS #11 does not work very well for server applications.

Consider that PKCS #11 was originally designed for small tokens (like smartcards) that the user could present to make crypto calculations. The token is normally locked down, and won’t perform any sensitive calculations until the user supplies their password. This works great in the context of a user presenting a token for limited transactions (like logging on, or approving a sensitive action), so long as you can ensure the password is not cached or captured.

But the whole thing breaks down once you start using PKCS #11 for servers. No longer are the tokens brought into the system by the user (part of what they have); instead they are permanently installed on the server. And it is no longer a real person providing the password; instead it is applications that must store the password and provide it (in stock implementations, in the clear) to the TRSM each time they want to do a sensitive operation. Obviously anyone who can monitor the communication channel can recover the password and improperly perform sensitive operations. Remote hackers can capture the passwords stored by the applications, and misuse them as well. In short, a stock PKCS #11 token used for host systems cannot prevent insider attacks — the very thing FIPS 140 levels 3 and 4 are supposed to ensure.
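
The anti-pattern sketched conceptually (a stub, not a real PKCS #11 binding; all names are illustrative):

    import hashlib

    class StubToken:
        """Stand-in for a PKCS #11 session (illustrative only)."""
        def __init__(self, pin: str):
            self._pin = pin
            self._unlocked = False
        def login(self, pin: str) -> None:          # analogous to C_Login
            self._unlocked = (pin == self._pin)
        def sign(self, data: bytes) -> bytes:
            assert self._unlocked, "login required"
            return hashlib.sha256(b"key" + data).digest()  # fake signature

    PIN = "123456"                 # stored in the app's config, in the clear
    token = StubToken(PIN)

    def sign_request(data: bytes) -> bytes:
        token.login(PIN)           # anyone who reads the config or monitors
        return token.sign(data)    # this channel can make the same call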

So why did companies get host form-factor PKCS #11 based TRSMs certified at levels 3 and 4? It is because the TOE was set too small, and the PP was not thorough enough (it either needed additional tests or to require a larger TOE).

You will note that this PKCS #11 vulnerability (which I have described on this blog several times in the past) is very similar to the attacks on the USB tokens. I have not read the details yet, but the problem sounds so similar I wonder if the tokens were using stock PKCS #11?

Finally, let me close this with a restatement of what FIPS 140 actually does right. They have an extensive PP (and the labs have the derived test cases needed to test it) for physical protection. I am not aware of any other public standard that does this. The PP is also good at making sure algorithms and RNGs are correctly implemented. The failing is not evaluating the larger system in which those tested components are used.

trsm.mckay January 8, 2010 9:01 PM

@xxx wrote: 1. It only takes one person to crack it and make a tool available. The rest of us can just, you know, download the tool and use it, without understanding the details.

Things are more nuanced when using hardware security. A hardware security designer (or at least a good one) is always aware that their product can be cracked given enough time, access, and resources. The trick is to design things so that the opponent has to go through the expense of cracking the hardware each time they want something of value. Obviously there is a spectrum of security problems, depending upon what cost, tools, and knowledge are needed to perform a particular attack.

@bob wrote: Last time I looked, the highest-level FIPS standard for operating systems assumed that the system being tested was being run on an isolated network, where the only people around were white hats with normal user privileges.

Absolutely not; you are confusing several different standards and problems. FIPS 140 at levels 3+ is intended to resist all external attackers and fully knowledgeable insiders up to some point of collusion (e.g. if you require 3-of-5 to do an operation, then it will take 3 crooked employees to penetrate the system under evaluation).

@kg wrote: The key incremental requirement of FIPS 140-2 is tamper-evidence.

Level 3 requires tamper-evidence (i.e. you can’t tamper with it without leaving some evidence). Level 4 requires tamper-response (detecting the tampering and securely erasing all the secrets before they can be revealed). I’m not sure what you meant by the rest of your post.

trsm.mckay January 8, 2010 9:15 PM

Rereading what @bob said, he was talking about FIPS and standards for operating systems; perhaps I need to clarify. Now NIST has a lot of standards (published as FIPS), and some of them do apply to operating systems. I guess it is possible that some of them work the way he mentions, but it is pretty unlikely.

I think he is primarily remembering the Windows NT Orange Book certification (which I mentioned in an earlier post), which incidentally was DoD, not FIPS. But even considering that NT was evaluated without a network, this is not an accurate statement. The Orange Book requires controlling privileged users, which Windows NT did.

As mentioned before, it is a good example of why choosing the proper PP and TOE is more important than the EAL.

Nick P January 8, 2010 11:05 PM

I just had to say wow at the comments. This could be the least productive comment section I’ve ever seen on this blog. I see a few regulars trying to steer it in a good direction, so here’s my two cents. Clive Robinson and I already fleshed out secure drive encryption pretty well in a previous post. If one is building hardware, then it will likely be an inline media encryptor. Here’s a start for someone looking to design one:

  1. Enforce red-black separation. The plaintext must NEVER touch any resource shared with the ciphertext side. The crypto component usually bridges the two sides & uses periods processing (data scrubbing) & partitioning of memory to ensure this.
  2. Keep the interface simple on the plaintext side. The implementation should be layered to allow for verification, possibly formal.
  3. Use tried and true methods. Keep the protocol as simple as possible.
  4. The device includes a TRNG and generates all keys itself, while having self-test features.
  5. The device should use two factor authentication if possible.

  6. The device MUST have a “trusted path.” This is a way to ensure user input is ONLY sent to the cryptoprocessor. In my designs, I usually have a miniature keyboard or something, similar to SafeNet’s, that connects directly to the encryptor. Even a PIN pad is decent, and an LCD for status or requests is better, if possible. External hard drives are ideal for this kind of stuff, as they are expected to be a bit bigger than USB sticks.

  7. In my designs, I create the encryption key using a combination of a user-supplied password & onboard (or USB-stored) key, mixed with SHA-2 over a number of iterations (see the sketch at the end of this comment).

  8. If intentional data destruction is a must, like for classified info, then an Erase button or command should be included & the master key should be in a predictable location. Erase should first overwrite the master key several times, then any plaintext (Red) buffers, & then try to burn random data into the flash memory that contains the user-specific key. If time is left, further sanitize RAM & storage. If a strong password is used and Red data is overwritten, then ciphertext is unrecoverable (assuming good crypto). If the password is weak, then the user-specific key must be wiped as well. All of this can happen in seconds. My strategy is just press the button and throw it as high as you can. It will be clear by the time the enemy grabs it. 😉

  9. Covert channel analysis. It’s quite easy, although tedious, to design a system without covert storage channels if one uses the MILS architecture & a separation kernel. Many are out there. Timing channels are trickier & require clocking inputs and outputs & preventing discrepancies in timing. Formal methods should also be used to catch them at the logic level. Some can be eliminated, but many can only be measured. Don’t discount the importance of these: real-world “side” channel attacks are based on exploiting covert channels. If high robustness is needed, then so is covert channel analysis.

  10. EMSEC. If your device is to be considered high assurance, rather than medium, then you will also need emanation security. The design should meet at least TEMPEST Level 2 standards and protect against both passive and active electromagnetic emanations. Many think TEMPEST is impractical. WRONG! It’s been used against us many times, and a recent attack by (Cambridge?) researchers used active emanations to drop an RNG’s entropy from 32 bits to something like 8. High assurance drive encryption requires TEMPEST protection.

Pretty tired tonight, so I may have left something out. Hope this helps you guys looking to design secure drive encryption. The most important parts are: red/black separation; design for verification; tried and true beats neat and new; crypto key = onboard-generated key (preferably stored off device) + user password; trusted path. That last feature is essential for secure distributed systems and would have prevented this attack. Have at it!
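
A rough sketch of the key mixing in item 7 (my reading of it; a production design would more likely use a standard KDF such as PBKDF2 or HKDF):

    import hashlib

    def mix_key(onboard_key: bytes, password: bytes, iterations: int = 100_000) -> bytes:
        state = onboard_key + password
        for _ in range(iterations):               # iterated SHA-2, per item 7
            state = hashlib.sha256(state).digest()
        return state                              # 256-bit data-encryption key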

Oliver January 9, 2010 5:24 AM

Has anyone ever seen a certificate that is worth the paper that it is printed on?

Just my $0.02 🙂

Nick P January 9, 2010 1:26 PM

@ Oliver

Actually, some certificates are quite good. For instance, a CISSP tells you the person has a certain breadth of general knowledge and maybe a little experience on a subject. A Red Hat Certified Engineer is usually very capable of working on a Red Hat box. And so on. Plenty of good certifications. Making a sweeping generalization that certification is useless for functionality or security is foolish.

In the security/crypto realm, it’s usually the Orange Book, C&A, Common Criteria or NSA evaluation. Common Criteria is mostly useless at lower levels. An example of a meaningful certificate is Green Hills’ INTEGRITY-178B RTOS being certified to “High Robustness,” with an EAL6+ rating. The OS was developed like an EAL7 application against a realistic protection profile, mathematically proven down to the code, and survived months of NSA pentesting with source code. So, thanks to their certificate, if their security target says it can do something, I believe it probably does. On the other hand, if they say something not in the protection profile (“our virtualization scheme stops all attackers”), then I’m not buying into it.

Yoram January 9, 2010 2:02 PM

Proper certification is a very good thing, but FIPS certification is useful mainly for marketing and little else.

A good certification process will have actual attacks done against the product. This only happens with Common Criteria and not with FIPS.

Another issue – a CC-certified product is more likely to be a mature product than an uncertified one. This helps to weed out immature products, even if it does not necessarily mean a more secure one.

Peter Hillier January 9, 2010 3:36 PM

To those who commented that a Common Criteria certified product is likely more mature: note that at least one of the SanDisk products in question, notably the SanDisk FIPS 140-2 product, was also certified EAL2 by the Australian lab that conducted the FIPS testing!

Nick P January 9, 2010 11:25 PM

@ Peter

EAL2 is the 2nd lowest level of the Common Criteria. The security target and/or protection profile tell you what security traits the product claims to have. Then the lab evaluates the product against those claims with a process of a certain amount of assurance. In this case, the claims made for the product were functionally and structurally tested, as in EAL2. EAL2 doesn’t require source code inspection or even a “methodical” evaluation of each component. In other words, they turned it on, tried a few things, made sure the paperwork was in order, and then stamped EAL2 on it. Maturity isn’t really an issue at that level, reinforcing your point.

However, starting at EAL4, I would agree with Yoram that evaluated products are likely more mature. Even FIPS 140-2 has some value in its algorithm requirements: how many products have been broken for using homebrew XOR-based crypto? So, it helps a bit, and the security targets and EALs, for those who really understand them, help compare and contrast different offerings pretty well. I still think no product certified below EAL5 should advertise itself as secure on the basis of a Common Criteria certificate.

Info on Common Criteria, particularly security target & similar documents
http://en.wikipedia.org/wiki/Common_Criteria

Info on EAL2 assurance level
http://en.wikipedia.org/wiki/Evaluation_Assurance_Level

Peter Hillier January 10, 2010 9:46 AM

@Nick P

I totally understand the difference and agree with you, but in this case, do you think the protection profile of the stick would have changed anything, given the FIPS 140-2 cert was already in place?

Nick P January 10, 2010 1:16 PM

@ Peter

We already covered the inadequacy of the FIPS 140-2 cert. It’s mainly focused on crypto, while CC certs focus on the whole system. That’s important: this attack is on a separate system that the SanDisk device trusts. So, I guess your question is: is there any way CC certification could have shown the attack was possible? Short answer: definitely.

I reviewed the CC documents for the product. In Australia, there is no Protection Profile for this stuff, so their functionality is in the security target. The important part is in 2.4.1.25 where it says it can establish a trusted path between it and a specific application on the PC, but that’s “not included in the TOE.” The TOE is what’s being evaluated and the authentication system isn’t in it. Hence, impossible to catch a flaw there. Also strange considering that software is at high risk.

The real problem here is the lack of a trusted path, which is impossible in commodity OSes, and the lack of source-level review. You don’t even need to look at the software or source until about EAL4, and at EAL5-7, NSA gets source for pentesting. This error was so obvious I’m sure NSA would have caught it, so an EAL5-7 cert would, as usual, mean the product is pretty secure. EAL4 might have caught it if augmented with source review, or if the OS had a trusted path, like BAE XTS-400, INTEGRITY Workstation, or Turaya Desktop.

If anything, I think these products make the case for open source (or at least evaluated source) software in any security certification process. Additionally, it reinforces the idea that we need a secure desktop platform because most of these crypto exploits start with a design flaw in base OS. Products like those above allow us to isolate the Password Entry app and ensure that only it receives user input. It also could ensure that only that app communicated with the device, which means the attack would have failed even with the poor software design. The problem is that the TCBs of current OS’s are so large that no product running on them can ever be certain to work right or achieve its security goals.

SanDisk Documents
http://www.dsd.gov.au/infosec/evaluation_services/epl/data_protection/SanDiskCruzerEnterpriseFIPS.html

Turaya Desktop Data Loss Prevention Solution
http://www.teletrust.de/fileadmin/files/Workshops/070205_RSA-WS/RSA-07_Dt-WS_Vortr8_ifis-Pohlmann.pdf

INTEGRITY Desktop based on EAL6+ RTOS
http://www.ghs.com/products/rtos/INTEGRITY_workstation.html

Dave IronKey January 10, 2010 2:30 PM

@ Veracitor

Many of your statements regarding IronKey’s security architecture are incorrect.

This is 100% incorrect: “The Ironkey spec gives Ironkey a “master secret” good for all customers’ devices, so by design the device is insecure against Ironkey.”

There is NO backdoor, and no “master secret”. Such a design would be fundamentally ridiculous, and completely unsalable. Do you really think that IronKey enterprise and government customers haven’t spent man-years doing in-depth security analysis of the architecture and even the source code of the implementation???

Our design documentation, including our FIPS 140-2 Level 3 security profile, clearly explains how AES key generation is done, how AES keys are protected (i.e. encrypted with a hash of your password), and how password verification and brute-force protection are implemented. These designs are both procedurally secure (i.e. the brute-force counter is stored in a hardened CryptoChip) and cryptographically secure (i.e. the AES key cannot be decrypted without your password). Also, the hashes used to verify your password and to decrypt the AES key are different (multiple hashes with salt for the AES key).
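
As a sketch of that separation, with one derived value to verify the password and a different one to decrypt the AES key (an illustration of the idea only, not our actual code):

    import hashlib, os

    def derive(password: bytes, salt: bytes, purpose: bytes) -> bytes:
        return hashlib.pbkdf2_hmac("sha256", password + purpose, salt, 100_000)

    salt = os.urandom(16)
    pw = b"correct horse"
    verifier = derive(pw, salt, b"verify")  # stored on device for password checks
    kek      = derive(pw, salt, b"unwrap")  # never stored; decrypts the AES key

Because the two derivations are domain-separated, the stored verifier reveals nothing usable for unwrapping the key.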

Also, the entire product is FIPS validated, not a sub-component module.

http://csrc.nist.gov/groups/STM/cmvp/documents/140-1/140sp/140sp1149.pdf

I’m not sure where you get that we “don’t credibly promise not to sign hacks for third parties.” Why would we sign custom firmware for anyone??? One, the development and maintenance of multiple firmwares would be cost-prohibitive, and two, why would we risk the integrity of our brand and products???

Dave IronKey January 10, 2010 2:36 PM

@ Nick P. Great post on Common Criteria. I have heard that the US Government is working on developing a Protection Profile for hardware-encrypted portable storage devices.

These recent events reinforce that security certifications are important, but do not by themselves make a product secure. Buyers, and vendors themselves, need to understand exactly what components have been validated, and whether they are being validated against a standard set of security requirements, or are simply being “validated” to do what the vendor says they do, even if that implies security vulnerabilities.

trsm.mckay January 11, 2010 12:46 AM

@Yoram wrote: A good certification process will have actual attacks done against the product. This only happens with Common Criteria and not with FIPS.

The labs that certify FIPS 140 products include a variety of physical penetration tests. Perhaps you meant something broader than “product”?

@Yoram wrote: This only happens with Common Criteria and not with FIPS.

Can you point to a CC standard PP that covers crypto hardware? I searched pretty extensively about 5 years ago, but only found some smartcard PPs that were much less thorough than FIPS 140. Of course I might have overlooked something, or perhaps a new standard has come out? Unless there is a comparable PP (and set of derived tests), this talk about CC being superior is just theoretical. Unless you trust individual entities to somehow come up with an adequate PP on their own?

@Dave Ironkey wrote: I have heard that the US Government is working on developing a Protection Profile for hardware-encrypted portable storage devices.

I think this could be good, but I’m not holding my breath until I see it (since I first heard of this plan in the late ’90s 🙂). The obvious benefit would be getting the good parts of FIPS 140 into a form where they can be used as a sub-component of other CC PPs.

Finally, Nick P’s comment about the weakness of the PP is right on. It does not matter how good the EAL is; all the EAL establishes is that the product operates as the design says it should. If you want to know whether the design is actually secure, that is where the PP comes in (with proper selection of the TOE, of course).

Stephen January 11, 2010 3:33 AM

These hardware-encrypted devices seem to be cracked by a different team every other month. It’s a concerning trend as hardware encryption is being pushed to laptops – I certainly hope the Chinese manufacturers are not as careless as our own.

Besides, I don’t see why anyone other than consumers would buy these expensive yet faulty point solutions when more capable software solutions are available for a fraction of the cost. Software protection at least works with all devices, such as external hard disks and memory cards, not just USB sticks.

We are currently screening different solutions for our staff, and one of the most promising technologies seems to be Envault’s system; it’s easier and cheaper to protect all our existing drives than to get every user onto hardware-encrypted drives. Plus there are no user passwords at all, so none of these password failures are possible, and the protection method has built-in remote disable and auditing.

Twylite January 11, 2010 6:01 AM

Four notes on FIPS-140:

(1) It tests that your algorithms are (statistically likely to be) implemented correctly.

(2) It ensures that your Critical Security Parameters (keys, other secret values) are protected against disclosure.

(3) It validates that your implementation performs according to its public specification.

(4) It validates that your hardware provides the level of protection claimed in the public specification.

The public specification for the Kingston device is at http://csrc.nist.gov/groups/STM/cmvp/documents/140-1/140sp/140sp929.pdf .

The CMVP has validated that this device works as documented, and that the documentation meets certain requirements in terms of algorithms used and key-management principles.

It does not validate that the device works as expected by an end-user who has been misled by the marketing information. That is not the point of FIPS-140.

Tobby January 12, 2010 8:58 AM

Well, it is worth remembering that while the ‘Evil Maid’ attack can be used against a fully encrypted computer, a simpler attack works against all of these encrypted USB sticks. Just install a copy of the USB Dump facility and you will get a copy of the stick’s data each and every time it is inserted into a compromised computer.

USB Dump
“USB flash drives dumped without prompting
January 11, 2010 – Category: Portable programs, Software & Co., Windows

In 2006 I blogged about the software USB Dumper. This software invisibly copied the entire contents of a USB stick to the hard disk – without asking, just in the background. The 2006 tool no longer runs on Windows Vista / 7, so I asked Bene to quickly knock together something new.”
http://stadt-bremerhaven.de/usb-sticks-ohne-abfrage-dumpen/
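
A minimal sketch of the idea (assumed behavior, paths, and polling scheme; not USBDumper’s actual code), showing why on-stick encryption does not help once the volume is mounted on a compromised Windows host:

```python
import shutil, string, time
from pathlib import Path

seen = set()
while True:                                       # runs forever in the background
    for letter in string.ascii_uppercase:
        root = Path(f"{letter}:/")
        if root.exists() and letter not in seen:  # a new volume has appeared
            seen.add(letter)
            shutil.copytree(root, Path("C:/dump") / letter,
                            dirs_exist_ok=True)   # silently copy everything on it
    time.sleep(5)                                 # poll for newly inserted drives
```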

HJohn January 12, 2010 10:13 AM

@ Tobby at January 12, 2010 8:58 AM
@ Twylite at January 11, 2010 6:01 AM


Yup.

It doesn’t matter how strong one’s encryption is if it can be bypassed.

As I tried to point out before, unless the actual exploit is understood, decision makers may implement the wrong solution, which wastes resources (and, worse, may not work).

Dave made a good analogy about cars: one faulty part can wreck the whole car, which is absolutely true. However, it is often much less expensive, and just as effective, to fix the faulty part than to buy or build an entirely new vehicle.

Veractor January 12, 2010 3:20 PM

@ Dave Ironkey

Your firm seems to have removed from its website the “Ironkey Enterprise” docs which explained how Ironkey devices decide to trust firmware updates from Ironkey. Why?

Also, as Furshlugginer pointed out, since you won’t tell anyone what they need to know to write a Linux driver (that is, document the host-to-Ironkey communications protocol), your devices cannot be trusted. When you say “users must install opaque, proprietary, Windows-only drivers,” you’re saying “don’t trust our devices.”

Finally, your firm’s docs make obviously bogus claims, like saying Ironkey corporate admins must have Ironkey devices. That’s marketing, not technology, since an administrator’s “Ironkey” could obviously be emulated in software (assuming your firm signed the relevant public keys).

gh0st January 12, 2010 4:41 PM

They probably used Vinetto, a well-known forensics tool, to recover the Thumbs.db from the flash drive and found it had the decryption hash stored inside it. Not saying that’s what they did, but it’s a possibility; it’s what I would have tried myself.

gh0st January 12, 2010 5:17 PM

@ Dave

“There is NO backdoor, and no “master secret”. Such a design would be fundamentally ridiculous, and completely unsalable. Do you really think that IronKey enterprise and government customers haven’t spent man-years doing in-depth security analysis of the architecture and even the source code of the implementation?”

Then the device would not be allowed for general sale to any consumer, full stop. If there were no way at all to break the security of your product, you would be refused permission to sell it.

In the United Kingdom the Regulation of Investigatory Powers Act (RIP) of 2000 makes it a crime to withhold encryption keys from the government. The United States has a history of trying to limit civilian use of military-strength encryption: during the Clinton administration, legislation was proposed to require government back doors to be built into encryption software. These proposals failed due to commercial opposition and protests that encryption bans simply would not work. Public outrage over post-9/11 legislation, ostensibly for “homeland defense”, has created greater awareness of encryption techniques. Government and law enforcement agencies, consequently, have a renewed interest in limiting access of strong encryption to the general public.

caf January 12, 2010 8:09 PM

It seems that no-one else has noticed that the described algorithm also reduces the key down to an effective 64 bits.

I am also very curious to know how the supposed software patches fix the problem – at the very least it would appear to still be susceptible to an offline guessing attack.

gh0st January 13, 2010 3:14 AM

@ caf

The only workable “attack” published against the Rijndael (AES) cipher is brute force. But I digress: if applications can be written that effectively attack and recover WEP keys by collecting packets of random, entropy-filled data and brute-forcing them, then it stands to reason that applications can be written that recover and brute-force people’s private encryption keys using a similar method.

When I see commercial encryption products on sale I always avoid them and mark them as untrusted. If it said encryption with an OTP using 8126-bit Sapphire 2 & Serpent in CBC (Cipher Block Chaining), then I would consider its implementation a secure product.

So in closing, all I can say is: if you don’t want to be busted with your stash, then make like a hippy and hide the hash!

gh0st January 13, 2010 4:34 AM

Remember when the NSA assured everyone that DES was unbreakable? IBM insisted there was no backdoor.. Yet in contrast how long does it take to break DES with John the Ripper today!?! Someone told a load of lies.

Luis January 13, 2010 5:17 AM

“FIPS 104-2 Level 2 certification …” <– an obvious typo, but nobody seems to have noticed?

Shouldn’t this be “FIPS 140-2 Level 2 certification “?

jgreco January 13, 2010 9:07 AM

@gh0st

“When I see commercial encryption products on sale I always avoid them and mark them as untrusted. If it said encryption with an OTP using 8126-bit Sapphire 2 & Serpent in CBC (Cipher Block Chaining), then I would consider its implementation a secure product.”

Funny, because if I saw a product claiming to use encryption with an OTP, I’d know they most likely had absolutely no idea what they were talking about, and I would RUN, not walk, away.

“Remember when the NSA assured everyone that DES was unbreakable? IBM insisted there was no backdoor.. Yet in contrast how long does it take to break DES with John the Ripper today!?! Someone told a load of lies.”

I suspect you don’t even know what John the Ripper does; it is quite apparent you don’t know how it works. John the Ripper is a password cracker, not a DES cracker. It rapidly guesses passwords, hashes them, then compares them to the hashed password it is attempting to crack. No “backdoor” in DES is involved.
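
A toy version of what a password cracker does (illustrative; the names are assumed, and a modern hash stands in for crypt(3) to keep it self-contained):

```python
import hashlib
from typing import Optional

def crack(target_hash: str, wordlist: list) -> Optional[str]:
    for guess in wordlist:
        if hashlib.sha256(guess.encode()).hexdigest() == target_hash:
            return guess          # this candidate produced the stolen hash
    return None                   # no candidate matched; try a bigger wordlist

stolen = hashlib.sha256(b"letmein").hexdigest()           # a "captured" password hash
print(crack(stolen, ["password", "123456", "letmein"]))   # -> letmein
```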

The reason plain old DES isn’t used these days is that its keysize is way too small, something we’ve pretty much always suspected.

A little bit of paranoia is always a healthy thing to have, but keep it reasonable and keep your facts straight.

HJohn January 13, 2010 2:25 PM

According to the SANS Institute:

–USB Flaws Prompt NIST Review of Cryptographic Module Certification Process (January 8 & 11, 2010) The National Institute of Standards and Technology (NIST) is investigating security flaws in several brands of USB drives that were thought to be secure. The vulnerability can reportedly be exploited to allow attackers to read data on drives protected by the 256-bit Advanced Encryption Standard. The vulnerabilities lie not in the cryptographic module, but in the software that authorizes decryption. NIST will be considering whether it should make changes to its validation process, as the USB drives in question all met the criteria. SanDisk, Verbatim and Kingston, the three companies that acknowledged the vulnerabilities in their devices, have issued fixes for the problem.
http://isc.sans.org/diary.html?storyid=7894

Nick P January 14, 2010 3:06 AM

@ ghost

8,126 bit Sapphire 2!? Wow, that sounds like some serious encryption! Definitely better than a measly 256-bit ECC or 2048-bit RSA or a 256-bit AES. It has like 32 times the bits of AES, which means it should be ^32 harder to crack! Sign me up!

@ jgreco

I’m with you this time. Marketing guys usually don’t know jack about the technicalities of crypto, and it shows when the posers try to pass for engineers. This dude is so full of it it’s unreal. Back on topic: FIPS and CC on encrypted USB drives aren’t adequate for defining security and need improvement. Sapphire 10,000,000+ bit or whatever probably won’t change that…. (laughs hysterically)

Robert Wann January 14, 2010 6:57 AM

The SySS report describes a memory-patch program inserted into the host PC’s memory (where the main executable, ExmpSrv.exe ver 2.0.5.32, resides) to completely bypass the password authentication process. A 32-byte signature block is identified during password reset and is found to be the same on all affected USB flash drives, regardless of multiple password changes or drive reformatting. Using that 32-byte signature block along with the memory-patch program effectively bypasses the user’s password authentication, allowing read/write access to an affected USB flash drive. Note that the 32-byte signature block disclosed in the SySS report is the AES-ECB 256-bit decrypted result using the correct password as the key.

The HEX of a 32 bytes signature block found on all affected USB drives:
00 00 00 00|B5 D3 68 DC|8A 4D A5 B1|FD 2E 68 84|
4D F2 0D 52|1E 2B F9 CD|00 00 00 00|00 00 00 00

That said, the report fails to address whether all user data blocks are indeed AES 256-bit encrypted with a secret AES key, as the product claims. If the user’s password is used to decrypt the same 32-byte signature block, and that block does not look like a valid secret 256-bit AES key, there could be no secret AES key at all; perhaps something as simple as an XOR algorithm does the trick. Or it may be that the signature block is used to decrypt the actual secret 256-bit AES key. Can Bruce talk to SySS for further findings?
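
As I read the report, the flawed flow is roughly this (a hypothetical sketch with assumed names, not SySS’s actual tool):

```python
# The host software checks the password itself and, on success, always sends
# the same fixed 32-byte blob to the drive. Patch the check in RAM and any
# password unlocks the drive.
UNLOCK_BLOB = bytes.fromhex(
    "00000000" "b5d368dc" "8a4da5b1" "fd2e6884"
    "4df20d52" "1e2bf9cd" "00000000" "00000000"
)

class Drive:
    def unlock(self, blob: bytes) -> bool:
        return blob == UNLOCK_BLOB          # the drive only ever checks this fixed blob

def host_unlock(drive: Drive, password: str, patched: bool = False) -> bool:
    password_ok = (password == "correct horse")   # host-side check on an attacker-controlled PC
    if patched or password_ok:                    # SySS's RAM patch forces this branch
        return drive.unlock(UNLOCK_BLOB)          # identical 32 bytes, whatever the password
    return False

print(host_unlock(Drive(), "wrong guess", patched=True))   # -> True: drive unlocked
```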

Endareth January 20, 2010 11:14 PM

@gh0st
“In the United Kingdom the Regulation of Investigatory Powers Act (RIP) of 2000 makes it a crime to withhold encryption keys from the government.”

My (admittedly limited) understanding of the RIP Act is that it’s intended to force the actual user of the encryption software to hand over the encryption keys; it has nothing to do with requiring some sort of backdoor in a product to bypass the encryption.

“Government and law enforcement agencies, consequently, have a renewed interest in limiting access of strong encryption to the general public.”

There’s a significant step between a government trying to limit access and it requiring encryption devices to have backdoors, especially when the device in question (in this case the IronKey) was partly government-funded and is used extensively by the US government and military. Something tells me the US government wouldn’t be too happy about using devices they knew had security flaws.

Clive Robinson January 21, 2010 4:01 AM

@ Endareth,

“Something tells me the US government wouldn’t be too happy about using devices they knew had security flaws.”

Actually there is a history of making machine “field ciphers” have flaws or be excessively brittle in design.

And there is a reason postulated for what the NSA were up to when they did it, and it always comes down to “reuse against the US”.

If you regard crypto as a weapon that can be used against you, it would be nice if you could design it to be strong enough for you, but blow up in your enemies’ faces when it is used against you…

One early mechanical machine (the Hagelin C34, using the “coin counter”) was found on later analysis to have many, many weak keys and only a few strong keys (it is possible to know which is which, but it is not obvious or simple).

Thus the logic of weak keys by design works as follows,

1, Alice having designed and built the equipment is fully aware of the issue.

2, Alice “issues keys” to those she issues the machine to.

3, Alice only issues keys from the stronger keys.

4, Bob is happy with the arrangement because he is (or may be) only liable for equipment and small amounts of key loss.

5, At some point, as is expected of all field systems, Eve captures a device and occasionally a limited amount of keying material.

So far this gets you up to the “enemy knows the system” point.

However…

Eve might at some point think,

Alice uses this equipment; I can’t find a way to crack their messages other than by getting key material, so I’ll use the design for my field ciphers as well.

And there are a whole bunch of examples of people either using captured equipment or building their own identical or very similar system based on Alice’s system.

So let’s make the following assumptions about the keyspace:

A, 5% – very strong
B, 10% – strong
C, 20% – medium
D, 50% – weak
E, 15% – very weak

Alice only uses the first three,

C – for front line “fox hole” traffic to command post.
B – for command post to division HQ.
A – for division to brigade etc.

Thus the majority of Alice’s traffic, which is just tactical and therefore has a life of maybe a few days, uses the medium-strength keys. Due to the volume of that traffic, and the assumption that Eve has a captured machine, Eve will glean some low-level tactical information over time.

However, the information Eve does get is either already known to Eve via her own more timely command field reports, or of little or no use; that is, it is past its shelf life or contains only unit and command designators that will change.

There are similar traffic level and lifetime assumptions for the strong and very strong keys.

That is, Eve only gets low-end tactical traffic and very rarely gets command traffic, etc.

Now think about when Eve re-uses the system…

If she has little or no idea that the majority of the keys (80%) are weak, she may just issue keys to all levels of her troops randomly.

This gives Alice a field day…

Thus Alice has an 80% chance of reading any traffic, including the highest levels.
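
The rough arithmetic, for those who want it (fractions taken from my keyspace list above):

```python
keyspace = {"very strong": 0.05, "strong": 0.10, "medium": 0.20,
            "weak": 0.50, "very weak": 0.15}      # fractions from the list above

weak_only   = keyspace["weak"] + keyspace["very weak"]    # 0.65
with_medium = weak_only + keyspace["medium"]              # 0.85

# If Eve issues keys uniformly at random, each message falls to Alice with
# probability ~0.65 (only weak keys readable) to ~0.85 (medium readable too),
# which is the ballpark of the 80% figure above.
print(weak_only, with_medium)
```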

Now let’s look at Ian, one of a number of not-really-trusted allies of Alice or Bob on the fringes of the war.

For any one of various reasons Ian decides to change sides. But as he has the equipment Alice issued, and his troops are used to using it, pragmatically he is going to carry on using it.

He or Eve now issues the keys to be used. Well, he is effectively in the same position as Eve unless he “re-uses” previous key material Alice issued (which is a major no-no for obvious reasons).

Now, in 1973 F. W. Winterbotham blew the gaff on what the Americans and British had done with regard to mechanical cipher machines a quarter of a century earlier, and due to various claims and counter-claims it became clear by 1980 that mechanical cipher machines really were poisoned chalices. So most countries lacking the abilities approached a little Swiss company, Crypto AG. Now, one of the things Crypto AG had a habit of doing was supplying brochures that had “example key generation system” examples… In the 1990s it was found that Crypto AG had apparently maintained a close liaison with the bods at Fort Meade (the NSA research office 😉 ), supposedly due to Crypto AG supplying NATO countries with crypto kit.

However, 1973 was a funny year: in many ways it marked the start of electronic data processing in non-major companies, and there were those funny little experiments with data comms going on that gave rise to the Internet. Oh, and some Dutch TV engineer started playing around with reproducing pictures of people’s TV screens, to see if Post Office TV licensing vans really could do it. Within a year Intel produced the first microcomputer chips, and the writing was on the wall for crypto.

The NSA kind of admit they “got it wrong” with DES: they did not realise just how fast software would make bespoke crypto chips obsolete.

Which brings us around to making brittle systems. The NSA realised that, as people had done with DES, they would make software versions with “variations”. So something had to be done.

Somebody somewhere came up with the idea of designs that were like building with glass: as long as you did it one way it was strong, and any variation would make it exceptionally weak.

This resulted in the ill-fated Clipper chip and Capstone project, which were killed off at the end of the Bill Clinton years.

Nobody outside of the NSA knows what the NSA are actually doing, but… old habits die hard, and it’s not just elephants that have long memories.

But does the NSA matter any longer? Well, yes.

They are still tasked with protecting not just the US government but the people of the USA, which includes those wearing uniform.

They know just as well as the bazaar salesmen in Iraq and Afghanistan that US military personnel lose thumb drives, which turn up for sale, “secrets and all”, on market stalls.

However, the enemy has changed: they are not government-funded organisations; they are small. The losses of encrypting thumb drives in Iraq are said to outnumber the insurgents…

Thus crypto as a weapon of war is back on the table, and the enemy is using US equipment against US personnel.

AES may well be secure, but what about the keys?

Specifically, key generation and key handling.

Business wants emergency key recovery; so do the US and other nations’ intelligence communities…

Thus you may now be less surprised at just why FIPS works the way it does, and why the NSA still makes “codeword” encryptors.

Welcome to the 20th-century “Great Game”. The question is: what is the 21st-century “Great Game”?…

Tom Corwine January 26, 2010 9:34 AM

I know I’m a bit late in responding to this, but since no one else commented:

@Till “One-time pads work fantastically in certain use cases. Every bank in Germany uses TANs; to do online transactions, they send you a list of 6-digit numbers in a secure envelope via snail mail. For each transaction, you must enter one of these numbers. When you’re running low, they send you another.”

A one-time pad (OTP) is NOT the same as a one-time password.

A one-time password is great for defending credentials against keyboard sniffers and the like, but it has nothing to do with encryption.

An OTP is quite a different thing.
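
To make the difference concrete, here is a one-time pad in miniature (illustrative only):

```python
import os

msg = b"attack at dawn"
pad = os.urandom(len(msg))                     # truly random, as long as the message, used once
ct  = bytes(m ^ k for m, k in zip(msg, pad))   # encrypt: XOR message with pad
pt  = bytes(c ^ k for c, k in zip(ct, pad))    # decrypt: XOR again with the same pad
assert pt == msg                               # perfect secrecy, but key management is brutal
```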

black monk February 17, 2010 10:39 AM

John posed the most interesting question, and so far no-one has commented on it.

“How did the same flaw appear in the product of 3 different companies? Is this because all three are outsourcing to the same 4th company? Or is it that they’re following an industry standard? Or has someone who’s involved in the design been moving from company to company? ”

It would be interesting to know if the password is the same on all devices.

Albion Zeglin August 19, 2010 10:04 AM

There has been excellent discussion of the limits of FIPS validation. I always inquire into the specifics of any validation before accepting it. I value validation of an entire device much more highly than validation of specific modules.

However, there is a more basic issue here: what protection can the architecture inherently provide if correctly applied? I have investigated and rejected multiple “secure” USB drives that rely upon host drivers for both encryption and fingerprint locking. Without hardware enforcement such as the IronKey’s, or separate key storage (such as Pointsec), the encryption cannot utilize more entropy (bits) than the password itself provides.
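
A back-of-the-envelope illustration of that bound (nothing vendor-specific; charset sizes and lengths are the only inputs):

```python
import math

def password_entropy_bits(charset_size: int, length: int) -> float:
    # Bits of entropy for a uniformly random password over the given charset.
    return length * math.log2(charset_size)

print(password_entropy_bits(62, 8))    # 8 random alphanumerics: ~47.6 bits
print(password_entropy_bits(95, 12))   # 12 random printable ASCII: ~78.8 bits
# Both are far below the 256 bits of the AES key they derive or unlock.
```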

Hardware protection is not perfect, but it can raise the bar significantly. Utilizing a separate channel for key transfer can raise the bar higher. But the bigger risk is compromise of the host machine itself. Consider all the risks, not just the easy ones.

Stephen Toussaint September 4, 2010 11:31 PM

Have read “Secrets & Lies”… thank you. Independent research like yours inspires my work, so I ask your opinion: is there any OS environment more secure than an original DOS 3.0 diskette running on a PC with no modem or wireless circuit, used as an isolated computing environment to take plaintext messages from the keyboard and return ciphertext files on a 3.5″ floppy disk for hand transfer to another system for email exchange?

Harvey Parisien September 15, 2010 1:11 PM

The folks at IRONKEY.COM claim to have built a FIPS 140-2 Level 3 USB key much like the ones you comment on here. Now Lockheed Martin has adopted this product as one of their own new product offerings. It would be interesting to extend your initial comments and research here to this current variation. It seems to me it is still likely just as weak.

Roy December 22, 2012 8:57 AM

Written By: Dave • January 8, 2010 8:53 AM

Please stop saying there is nothing wrong with the encryption, it’s just the implementation that’s wrong.

That’s like saying they were perfectly healthy, except their heart stopped beating.


It is not like saying it is BETTER than all other Encryption Devices because there is a “Special Spot” for Special People to write their Password on it.

WE are NOT saying “there is nothing wrong with the encryption”; NIST is, and they tested it to “Level 2”. Next you will say that you can leave it in the box, not plug it into anything, and it will exceed your expectations (and what it was tested for).

Read: http://en.wikipedia.org/wiki/FIPS_140-2_Level_4#Security_Levels and see what you get.

That is the only claim being made. If you want Level 3, Level 4, or two rounds of AES-256, then get that instead. If you need “authentication”, then that is what you need. If you need Common Criteria certification, then buy that instead. The crypto algorithm was implemented correctly; no other guarantees apply.

daser camp January 23, 2014 7:34 AM

My question is: how does one crack drives encrypted with BitLocker, which is present in Windows 7?

Anon May 18, 2014 5:48 PM

FIPS means “key escrow”. I just modified the RNG in source to return genuine random numbers, and the FIPS tests now fail. Tracing back the cause: your “random” numbers must be the output from the NSA escrow keys, or FIPS will not validate.

FIPS exists principally to ensure that no US government employees can create ciphertext the NSA cannot easily read, and of course to help let them into anyone else in the world silly enough to use FIPS too.
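
A likely mechanical explanation for the failure, sketched below: FIPS-approved generators are deterministic DRBGs validated by known-answer tests, so replacing their output with genuine randomness fails validation by construction (the seed, state function, and expected value here are all assumptions for illustration):

```python
import hashlib

def toy_drbg(seed: bytes, n_blocks: int) -> bytes:
    # A toy deterministic generator: same seed, same output, every time.
    out, state = b"", seed
    for _ in range(n_blocks):
        state = hashlib.sha256(state).digest()
        out += state
    return out

KNOWN_SEED = b"\x00" * 32
KNOWN_ANSWER = toy_drbg(KNOWN_SEED, 2)        # recorded once, at validation time

def power_up_self_test() -> bool:
    # Known-answer test: fails the moment the output path stops being deterministic.
    return toy_drbg(KNOWN_SEED, 2) == KNOWN_ANSWER

print(power_up_self_test())                   # True for the unmodified generator
```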
