Schneier on Security
A blog covering security and security technology.
March 4, 2010
Crypto Implementation Failure
Look at this new AES-encrypted USB memory stick. You enter the key directly into the stick via the keypad, thereby bypassing any eavesdropping software on the computer.
The problem is that in order to get full 256-bit entropy in the key, you need to enter 77 decimal digits using the keypad. I can't imagine anyone doing that; they'll enter an eight- or ten-digit key and call it done. (Likely, the password encrypts a random key that encrypts the actual data: not that it matters.) And even if you wanted to, is it reasonable to expect someone to enter 77 digits without making an error?
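The 77-digit figure falls straight out of the entropy arithmetic. A quick illustrative sketch (Python, mine, not from the post):

```python
import math

# Each uniformly random decimal digit contributes log2(10) ≈ 3.32 bits.
bits_per_digit = math.log2(10)

# Digits needed to cover a full 256-bit keyspace: 256 / 3.32 ≈ 77.
digits_needed = 256 / bits_per_digit

print(f"{bits_per_digit:.2f} bits/digit, {digits_needed:.1f} digits for 256 bits")
```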
Nice idea, complete implementation failure.
EDITED TO ADD (3/4): According to the manual, the drive locks for two minutes after five unsuccessful attempts. This delay is enough to make brute-force attacks infeasible, even with only ten-digit keys.
So, not nearly as bad as I thought it was. Better would be a much longer delay after 100 or so unsuccessful attempts. Yes, there's a denial-of-service attack against the thing, but stealing it is an even more effective denial-of-service attack.
Posted on March 4, 2010 at 6:05 AM
• 97 Comments
Sounds more like a marketing ploy than better security!?
With only space for five buttons, how would you do it?
Well, you could enter a 25-digit code to get 80 bits of entropy, which should provide enough security for the next 10 years or so.
Sure it's not very convenient, but on the other hand you get proper encryption without installing software and you do not have to worry about interoperability. Could be worse.
Do you have some reference saying that the encryption key is actually generated from (or encrypted with) the digits entered? Or are these digits simply a credential to release a factory-generated 256-bit key buried deep in the silicon or maybe only enabling the security function of some crypto chip on board? The latter would be no different from various crypto card systems out there which are deemed to be secure, some even legally mandated or required.
Their webpage says something about 2-minute lockout after entering a wrong PIN several times so brute-forcing a PIN by attaching a piece of electronics to the keypad is not a valid attack.
These are five buttons, not ten -- so 25 "digits" would only give you 58 bits of entropy. For 80 bits, you would need 35 "digits" -- and, honestly, how many people are willing to memorize even these 25 digits?
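The arithmetic behind these figures is just code length times log2 of the alphabet size. A small illustrative sketch (not from the commenter):

```python
import math

def entropy_bits(alphabet_size: int, length: int) -> float:
    """Entropy of `length` uniformly random symbols drawn
    from an alphabet of `alphabet_size` values."""
    return length * math.log2(alphabet_size)

print(round(entropy_bits(5, 25)))  # 25 presses on 5 buttons: ~58 bits
print(round(entropy_bits(5, 35)))  # 35 presses: ~81 bits, clearing the 80-bit bar
```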
Did you read the product's manual?
"After 5 unsuccessful user PIN entry attempts, your Flash Padlock will be disabled for 2 minutes. The red LED will blink to indicate the unit has been locked. After 2 minutes, you can attempt to unlock your Flash Padlock."
So it's no more than 3600 attempts per day. 7 digits will give you an average of ~4 years of brute-forcing. Sounds good enough for casual use. (Of course it's not top-secret quality, but it's much better than the rest on the market.)
I'm assuming here that there are no other implementation issues with the product, e.g. that there is real encryption instead of just a password check, and that encryption key is derived via hardware noise generator. It would be interesting to know whether this assumption is sound.
Also, if I was designing this thing, I would have made it lock permanently after 10 (or something) wrong PIN attempts. You could still reset it, but the data would have been lost.
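The back-of-the-envelope timing a few lines up works out as follows (an illustrative sketch, assuming the 5-tries-per-2-minute lockout holds indefinitely and 10 values per digit):

```python
# 5 attempts allowed per 2-minute lockout window.
attempts_per_day = 5 * (24 * 60 // 2)          # 3600 attempts/day

keyspace = 10 ** 7                             # 7-digit PIN, 10 values per digit
days_to_exhaust = keyspace / attempts_per_day  # ~2778 days to try them all
years_on_average = days_to_exhaust / 2 / 365   # expect success halfway through

print(f"{days_to_exhaust:.0f} days to exhaust, ~{years_on_average:.1f} years on average")
```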
"complete implementation failure"?
Well, even if the effective security is closer to 50 bits rather than 256, so what?
The physical security aspects (lockout, breach detection) are much more interesting than quibbling about the number of buttons.
All this talk of 77 or 25 digit passwords is moot if you check out the products page on the Corsair site.
Protection: Your own 4-10 digit PIN protects and locks access to the Flash Padlock 2
The question to ask (or my question at any rate) is, can the hardware encryption be bypassed to attempt software decryption?
I think that it's not so simple:
1. The lock mechanism is sufficient to make a brute-force attack take too long to be interesting, so the ~33 bits of entropy of a 10-digit PIN shouldn't be compared directly against the 256-bit keyspace.
2. The choice of AES-256 may be for performance reasons (depending on the flash controller).
So I think the real security question here is how the PIN code is verified.
PS: From a marketing point of view, it's impossible to ask more than 10 digits.
Clearly we need a USB memory stick into which you can plug a USB keyboard to type in a passphrase!
"From a marketing point of view, it's impossible to ask more than 10 digits."
Exactly, and ten digits is not enough to make brute-force attacks too long to be interesting.
There are 5 buttons, but it appears that pushing a button to the left gives you one number, while pushing to the right gives you another number. The buttons are labeled with two numbers on each one (0 | 1).
If any sort of valuable information were stored on such a device, the hacker would have an easier time disassembling the device, removing the physical memory chip and finding a device to read the memory chip. Considering most of the USB manufacturers use similar memory chip types such as TSOP and PQFP and all the chips tend to print their manufacturing information on the chip itself, it shouldn't be too difficult to figure out how to extract the data from the chips, thereby bypassing the security hardware. It's the same problem associated with physical access of a computer: If a hacker has physical access, extracting data is simple if you remove the HDD and attach it to another computer.
A device like this isn't meant to provide any security to people that require it for most or any documents. Rather, this sort of device prevents a crime of opportunity. It's like putting valuables into a small lock box. The lock box can easily be transported and opened by a determined thief, but it deters the less-than-determined thief from bothering.
In the end, I think something like WinRAR's archive security is a bit easier to use and makes it more difficult to circumvent security methods.
Why is the "lockout time" even relevant? It's DRM.
If you have physical access to the device, you can crack it open and access the Flash storage chip directly. That way you will be able to crack any reasonable key with little computation time.
It looks like there are only 5 buttons. 5^10 combinations is about 23 bits; in any case, 10^10 is still only about 33 bits. I wonder what mode of operation it uses (please tell me it's not ECB...)
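Why ECB would matter here: any mode that enciphers each block independently maps identical plaintext blocks to identical ciphertext blocks, which leaks the structure of the stored files. A toy demonstration (a keyed hash stands in for a real block cipher; this is not real encryption and says nothing about what the drive actually does):

```python
import hashlib

def toy_ecb(key: bytes, plaintext: bytes, block: int = 16) -> bytes:
    """Encipher each block independently, ECB-style. The keyed hash is
    only a stand-in for a block cipher; the point is the mode's leak."""
    out = b""
    for i in range(0, len(plaintext), block):
        chunk = plaintext[i:i + block].ljust(block, b"\0")
        out += hashlib.sha256(key + chunk).digest()[:block]
    return out

ct = toy_ecb(b"secret key", b"A" * 16 + b"A" * 16 + b"B" * 16)
# Repeated plaintext blocks show up as repeated ciphertext blocks:
assert ct[:16] == ct[16:32] and ct[:16] != ct[32:48]
```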
"Why is the 'lockout time' even relevant? It's DRM. If you have physical access to the device, you can crack it open and access the Flash storage chip directly. That way you will be able to crack any reasonable key with little computation time."
It depends on how the lockout is implemented. But, yes, you're right. In my threat model, I was assuming a non-destructive attack.
It is curious that nobody has asked the most obvious question here:
How do I know that this device is actually encrypting any data?
In most cases the weakest link in the chain is the human holding the ends.
Also, do you really want to walk around with a stick of explosive with the blasting cap already inserted?
Which is why the likes of the NSA have crypto ignition keys which are easy to remove.
The fault of this product is not separating the cap and the C4.
Arguing about the number of keystrokes is like arguing about how many layers of tissue paper you need over the cap before you get a 50/50 chance that it won't explode in your face when you hit it with a 1/4-pound hammer.
If I really want the data, I will let the user input the code, while lurking in the memory space, and then leech out all content once the data is out in the open.
To me, this is still the failure of those devices; namely, you can't use them on untrusted PC's without fear of leaking all your information.
10 digit numerical passcode? Sounds like if you are in America and you know the owner's phone number, you'll be able to open this right up.
There is a simple way to make it much more secure (I don't know if this particular product implements this or not though). First, you have to make the controller use internal storage for its firmware, so there is no way to read its internal data. Then, when the user sets the password, an internal random 256-bit key is generated from a random source and written to the internal storage. Then, when doing encryption or decryption, the internal key is XORed with the user password. This way, even if you have physical access to the flash chips (which is not difficult), you still can't brute-force the password, even if the user entered a relatively short one, without the internal key in the controller (which should be completely invisible outside of the chip).
Of course, it may be possible to probe the controller to get the internal controller. I don't know enough about flash storage to know how difficult this is.
"Of course, it may be possible to probe the controller to get the internal controller."
"Of course, it may be possible to probe the controller to get the internal password."
Sorry for the typo :P
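Ping-Che Chen's mixing step, sketched in Python for concreteness (purely illustrative; a later comment in this thread takes issue with the XOR):

```python
import secrets

# At setup the controller draws a random 256-bit internal key and keeps
# it inside the chip -- the part an attacker who desolders the flash
# chips never sees.
internal_key = secrets.token_bytes(32)

def mixed_key(user_pin: bytes) -> bytes:
    """XOR the zero-padded user PIN into the internal key, as proposed.
    Note that a short PIN only perturbs the first few bytes."""
    padded = user_pin.ljust(32, b"\0")
    return bytes(a ^ b for a, b in zip(internal_key, padded))

k = mixed_key(b"8675309")
assert len(k) == 32
```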
At least it looks good. How long till the paint comes off the keys you use the most?
There are only 5 input buttons. They are labeled in pairs because people memorize numbers using all 10 digits.
With only 5 buttons, you'd need 110 digits, not 77. The device only allows pins of 4-10 digits, however; assuming you were dutiful and used all ten the keyspace there is ~9.8M.
Assuming you weren't, a 4-digit PIN has a keyspace of 625, which, given 2-minute lockouts after 3 attempts, can be traversed in 7 hours.
This is really excellent kid sister encryption; it worries me, however, that a government official might think this device secure enough to get even more careless with it.
I am not a hardware engineer, nor do I play one on TV, but: even if the encryption happens at the flash level, and even if the password key is long enough to prevent brute-force attacks given the timeouts, there must be at least a passing chance that the lockout machinery has an analogue component (so, to prevent lock timeouts, just pull a reset pin high, or similar).
Exactly. This is why I never really understood the point of these types of devices. The only thing I can really think of is that since the user is inputting their password directly into the hardware, special software on the host computer isn't needed. Worthless on untrusted computers, but convenient on trusted ones.
I would think this could be beneficial if used in conjunction with software crypto.
Providing you don't destroy the case, you'd have to get the key entered into the keypad right to begin attacking from a software perspective.
How its set up is a benefit to casual use, but this could be used to deter (annoy) more determined intruders. Would make it tougher to copy the device so they can begin a software attack against the copy.
@ Ping-Che Chen,
"... ... Then, when doing encryption or decryption, the internal password is XORed with the user password."
NO No no.
You were doing OK up until then.
You are making the "looks like an OTP" mistake.
Although not impossible to stop, reading the internal contents of a chip can be made difficult. BUT if you plot the level of difficulty linearly on the X axis and the cost on the Y axis, you will find a good approximation to an exponentially increasing curve.
Thus you could assume that the internal secret becomes available in the length of time it takes to dissolve the chip plastic (which, with the right solvents, is easily measurable in a couple of lockout times) plus the length of time it takes to mount it up in an appropriate microscope/probe test jig (which can be, and has been, bought second-hand on even meagre university project funds).
XOR is not the way to go. You have AES on chip, so why not use it in a variant of the Unix password system to bump the real work factor up? Oh, and make sure the variant includes the number of times through AES as well as a salt, to stop rainbow-table trade-offs.
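Clive's "Unix password system" suggestion corresponds to a salted, iterated key-derivation function. The Python standard library's PBKDF2 does exactly that (an illustrative sketch; the iteration count and salt size are my own choices, not anything from the product):

```python
import hashlib, os

def stretch_pin(pin: str, salt: bytes, iterations: int = 200_000) -> bytes:
    """Derive a 256-bit key from a short PIN by salted, iterated hashing.
    The salt defeats rainbow-table trade-offs; the iteration count
    multiplies the attacker's per-guess work factor."""
    return hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, iterations, dklen=32)

salt = os.urandom(16)
key = stretch_pin("8675309", salt)
assert len(key) == 32
```

Each guess now costs the attacker 200,000 hash operations instead of one, on top of whatever lockout the hardware enforces.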
I think that what this thing really needs is complete data wipe (cryptographic erasure) after say 10-20 failed attempts. Just erase the password encrypted key and you have deleted all the data, at least assuming that the 'bad guys' haven't already copied that memory.
Seriously, this could be good enough for any secrets that won't be cracked by a crack team of security/hardware experts. If they are just going to try to get in through the front door (password guessing), keeping them out is easy. If they are going to compromise the hardware, well, they probably can find easier ways in than that.
"I wonder what mode of operation it uses (please tell me it's not ECB...)"
The chances are even if it's not it might as well be.
There are a whole load of issues with encrypting a database, which is essentially what a file system is (i.e. it is a keyed flat DB with fixed-length records etc).
You then need to add in the constraints that a flash file system adds (ie wear leveling and record erasing) and basically very few people get it even close to being right.
It's the sort of stuff PhD dissertations were made of just a few years ago.
And I suspect if you asked Bruce nicely he would probably put his hand up to saying what some of the issues are, but also admit to not knowing them all off the top of his head (unless he's been brushing up recently).
I have slightly painful memories of having conservatively designed an HD encryption system and being somewhat proud of myself for having covered all the bases, then having it ripped apart in minutes by somebody in the know, with tricks I'd not heard of at the time.
One of the tricks actually capitalised on the conservative nature of the design (i.e. in strengthening it in one direction I'd inadvertently weakened it in another).
Luckily, the way I'd designed the system, it only took a few minutes' work to identify and make most of the changes, then a day's slog writing a new sub-module, then into the long formal test stage...
If someone's going to try to brute force it, they're probably going to break the casing and interface directly with the electronics. Assuming this is the case, I'm guessing that power cycling the device every few attempts might be enough to circumvent the 2 minute lockout. This of course assumes they took a naive approach to implementing the timer.
If the encrypted key is stored with the data, i.e. in the first sector, then interfacing directly with the storage chip(s) might also get around a lockout.
It'll probably keep your data safe if you lose it and some random person picks it up, but I wouldn't trust it to protect me from someone who seriously wants my data.
Why not reverse engineer the software/hardware, extract the memory chips and brute force those? Any government can do that easily, and the cost would not be astounding for non-government actors either. The max PIN is 10 digits, trivial to brute force, and from the look of it we're talking 5 values per digit, not 10.
I somehow doubt it's tamper-resistant enough to mitigate that.
Bruce, I assume manufacturers read your blog to tease whatever information they can out of it. How else would they FINALLY have a USB device which cannot be broken through software (or, more accurately, has apparently been designed with that goal in mind)? With all of the keylogger-type attacks, which scare people as much as a pair of flaming underpants, they seem to be going in the right direction.
I was actually impressed that it appears they are trying to actually improve on the totally broken chips you've put in the dog house.
Do you have any recommendations for any designers reading your blog on how to avoid issues with trusting the computer with your PIN/password and simultaneously supporting enough bits of entropy? Or, if none, a recommendation on which of those two goals they should focus on instead?
@Andrew Suffield "How do I know that this device is actually encrypting any data?"
That's not the most obvious or relevant question to my mind. @Daniel got to it with his observation
@Daniel " if you lose it and some random person picks it up"
The first question to ask is "What degree of protection does this device offer my information?" What's the metric?
This is what Bruce and his fellow cypherpunks are getting at. The lower the entropy, the less the protection. But all Corsair says is "suited for security-minded consumers and professionals", which could be construed as a promise to the CIA, or "AES", which is a method, not a measure. What protection profile did they adopt (if any) during design? What evaluation did they submit it to? They say you can put your sensitive data on it, but that word covers a host of sins (my business plans, maybe; the NOC list for all my undercover agents in Colombia, probably not). Managers and deciders are being told to be standards-compliant (FIPS 140-1, AES etc), so they hear AES and argue "that's good enough."
GSA, though, certifies manufacturers' security containers before they are made available on schedule for sale to the Gov't. The certification is a function of how much time it takes to break into the safe. Can't we expect, or shouldn't we demand, the same from "security" manufacturers? If I'm a manager and you tell me that the Corsair will stand up to a determined attack for XX days or XX minutes, I'll be able to judge how suitable it is for my sensitive data.
It is interesting to note that Corsairs were French privateers.
Corsair's warranty promises "Corsair hardware product is guaranteed to operate, as specified by its datasheet, and in the operating environment for which it was intended, for the life of the product or the extent of the warranty." Does this imply they are liable for implementation details that give up the key or data? Even if it does, they limit themselves to the cost of the device or its replacement, so I don't take their promise too seriously.
I would like to run forensics on the stick after the data was cleared with their 911 method.
You know that fast voice at the end of the commercial? The one that warns about undisclosed costs, mileage and side effects... it said "PadLocks are developed and produced by Corsair Memory based on licensed technology (patents pending) from ClevX, LLC." ClevX, Clever eXtensions, is an IP collection company few have heard of that holds a pile of patents. If they are who I think they are, then they are smart people (they may actually have invented a simple low-cost way to deflate the power of a hurricane) and they license a lot of these keypad sticks to other companies.
Can we get a white board and attack tree diagraming app built into the blog? Would it help? It would help me.
"Do you have any recommendations for any designers reading your blog on how to avoid issues with trusting the computer with your PIN/password and simultaneously supporting enough bits of entropy? Or, if none, a recommendation on which of those two goals they should focus on instead?"
What's the betting Bruce will direct you in the direction of a book shop in a few days' time...
I think this is of more use in a more traditional environment:
Ambassador travels home, gets a new key, learns it, writes it down and puts it in the safe once back in the embassy, and then you can send over USB sticks any old how.
The advantage is that you don't have to fear hacking. At the embassy you have a PC with USB, a printer, monitor and keyboard and no network connection anywhere in sight.
Substitute embassy with any high value enterprise and you've got customers enough.
Add a keypad as a second factor to standard password-entering software, and it would be good.
And, perhaps, burning the data out using an internal battery after 10 failed attempts would be good too. Flash drives, encrypted or not, shouldn't be the only copy of your data anyhow.
@ BF Skinner,
I had a look at your ClevX link and found amongst other things the following white paper,
And to be quite honest I don't like it one little bit.
It appears in practice that DataLock has very little to do with the encryption on the flash drive.
From the blurb (if it can be believed), it is actually technology that hides the flash drive from the host computer until the PIN is entered.
That is it plays around with the USB spec to make the device look like a mobile phone charger or equivalent or an ordinary USB Mass Storage Class Device.
Worse the device has an internal battery and the user can sit there and enter in pins without it being connected to a USB host...
What gives me grave misgivings is they define it as being suitable as a "two factor authentication" device...
1st factor, is having the device.
2nd factor, is knowing the pin.
I will let others make judgment on this but as far as I'm concerned that is not two factor authentication in the traditional sense (ie you don't include the data storage device as one factor).
The question then arises just how does DataLock interface to the encrypted flash drive to unlock it?
Could it be as simple as just enabling the USB interface and raising a single "good to go" status bit/line?
Oh and apparently "Black Box" use the technology in their Take range which includes a charming little pink number called secret diary which is OK for a teenager to hide her thoughts...
"1st factor, is having the device.
2nd factor, is knowing the pin."
Not the worst definition I've heard
(1st Factor: Password_A, 2nd Factor: Password_B) but it ranks.
Meh on the number of bits of entropy. I'd be much more worried about the classic "brute" force attack: I'll cut off this thumb if you don't give me the combination to this other "thumb".
Not a new idea: http://bit.ly/ahL214
@ EH and others thinking about designing a secure USB mass storage device.
A little advice.
1, Put the key on a separate device that plugs into the memory storage device (i.e. an ignition key based around a secure smart card or SIM chip). This holds the AES keys and possibly the hardware crypto engine as well.
2, Use a 10-40 digit PIN for the ignition key (either directly or through the mass storage device), which unlocks the secure smart card or SIM.
3, Use PK to communicate management info between the various bits of the system.
That is the ignition key has a private key and a public key (likewise the mass storage device if it contains the crypto engine). Pulling the ignition key kills the encryption engine.
The easiest way to build such a device is to have the mass storage device support "USB On The Go", where it acts like a USB mass storage device to the computer on the USB (B socket or A plug) interface and like a simple host on its (USB A socket) interface.
The ignition key acts as a USB device only. You can buy off-the-shelf chips that act as USB devices and interface directly to smart cards or SIMs, so the design of this is a bit of a no-brainer. You can get secure smart cards / SIMs at EAL 4 and above quite easily, and they have the crypto engine built in as well as the PK certs and management etc.
You can now get single-chip microcontrollers with two or more USB interfaces and interfaces to flash devices or SD/MMC cards etc.
With a little skill and thought you could probably have a demo unit up in a month, as nearly all the software you need to do the heavy lifting is available from the chip manufacturers. Therefore you are mainly writing a "script" to glue it all together.
Ok, so we have an encrypted 8GB USB drive with a trusted path (hallelujah!), AES-256, a lockout feature and a 4-10 digit PIN for authorization, for the low price of $60! I have to disagree with some of the comments: that's freakin' awesome! That is... if regular consumers are using it to prevent opportunistic or non-technical attacks. Not government, high value assets, sophisticated attackers, etc. To be clear on my stance: good for low assurance situations, nothing more. The trusted path makes it worth the money: most of the "better" encrypted drives retrieve the password through the main operating system of a PC. There are many more attack vectors on a laptop or desktop than a flash drive. So, the system is more specific, the TCB smaller, and the protection good enough to deter non-technical or opportunistic attackers. It's also cheap. That's nice.
There is one particular advantage that drives with PIN or trusted path have over PC-based drives: it should be harder to write software that subverts them. I could see things like SanDisk being subverted by ignorant script kiddies thanks to well-written kernel-level tools that intercept passwords, but attacks on this device are likely to require physical possession and a certain amount of hardware knowledge. If you ask me, this is an improvement in the world of COTS crypto bulls***.
It will be hard to build something better that's not a kludge. Clive and I were working out the details in the past. It basically takes an inline-media encryptor situated between the main system and the storage medium. Like the NSA's IME, it must require two-factor authentication, key/buffer zeroize function, and covert channel reduction. COTS solution might use separation kernel, truecrypt-derived filesystem encryption, and RAM overwriting on shutdown. One application is an embedded HD enclosure that IME's arbitrary HD's. Most recent incarnation uses VIA Artigo board for cheap, low watt, crypto-accelerated action. Still takes up plenty of space.
I've been wondering how to make it as small as a typical external HD. I think it could be done using an embedded board whose tiny processor has TRNG and crypto acceleration. Certain FreeScale PPC chips and, for higher security, General Dynamics INFOSEC chip come to mind. I don't know how to maintain security & price if I apply this approach to a flash drive. I think that would require a more expensive, custom approach. We are better off securing small, portable hard disk enclosures with built-in encryption. What do the rest of you think? Is that the most cost-effective approach for implementing an open-source, COTS media encryptor?
No it isn't the theoretical most secure device imaginable.
However - if you are a local council, hospital or company with workers who occasionally need to copy data between machines - this is a very good choice.
You don't need TrueCrypt installed, and have no issues with copying between OSes or on machines where you don't have admin rights. It is secure enough (assuming they have done the HW correctly) not to worry when someone drops it in the pub.
It is available in the shops - you don't need to own your own airforce to buy one - and it doesn't cost much more than any decent brand memory key.
I think, like everything else, whether or not this is secure enough depends on the information it is protecting.
It would be effective against casual loss and snooping, and a deterrent against eavesdropping. It would also take a pretty patient person to do a brute force.
On the flip side, it is insufficient to protect information that is highly sensitive.
Of course, IMHO, if the information is so sensitive that the protection of this device is insufficient, it probably shouldn't be on a USB stick in someone's pocket anyway. Secure storage and mobile storage are often mutually exclusive.
Good point about security cost being relative to the value of the information. There are a few points I disagree with. First, an encrypted external drive isn't a deterrent against eavesdropping: it promotes it by making eavesdropping more productive, whether physical eavesdropping on the individual or digital eavesdropping on the PC processing the sensitive data.
Additionally, we don't know enough about how the device works internally to gauge the work involved in brute force, or even whether brute force is the most cost-effective attack. As Ross Anderson points out about side-channel attacks, his *undergraduate* students routinely extract secrets from "secure" chips. They also use equipment that isn't outrageously expensive or rare. The likelihood of this attack depends mainly on the value of the information, as the attackers and equipment are out there for the buying. How much would one have to pay an unscrupulous geek in a university to get the PIN with a side-channel attack?
"information... so sensitive that... device is insufficient, it probably shouldn't be on a USB in someone's pocket anyway. Secure storage and mobile storage are often mutuall exclusive." (HJohn)
The last point I'm only partially in disagreement with. Generally speaking, most tiny devices aren't secure or aren't cost-effective to secure. I mentioned something similar in the last post. However, one can have very portable security. For instance, General Dynamics' TACLANE line of communications products provides excellent, military-grade security and is pretty portable. I mean, the Edge smartphone is huge by commercial standards, but it's still very portable and usable. A netbook with trusted boot, a separation kernel and good architecture gives ease of use with excellent security in a tiny, energy-efficient package. Green Hills built one on INTEGRITY, in fact. Another company built a medium assurance VPN solution from an inexpensive, Ethernet-enabled PCMCIA card with OpenBSD. Mobility and security are often at odds, but they aren't as mutually exclusive as many think. They can be mixed together in many useful scenarios with careful engineering. It's been done over and over. Companies just need to do it more often. ;)
I just ran into a similar issue recently. In order to get government certification, we had to have our passwords encrypted with AES. I really wanted to grab whoever wrote it and try and get it through their skull that the passwords are usually only 8 characters long!
See this device:
And I agree with the rest of the commenters: "complete implementation failure" is off base. And even if it was an implementation failure, at least this is a step in the right direction. Companies are starting to come out with products with genuine security; don't kick them in the nuts.
Is anyone interested in a side bet on which attack vector will crack this puppy quickest?
I'll place my bet on differential power analysis identifying critical code sections, and simple timing-glitched power attacks completing the job.
BTW lock-outs after 3 or more tries only stop a hacker if the device has an internal battery back-up or some form of internal energy storage. I'll bet this USB stick has neither.
"Exactly. This is why I never really understood the point to these types of devices. The only thing I can really think of is that since the user is inputting their password directly with the hardware, special software on the host computer isn't needed. Worthless on untrusted computers, but convienent on trusted ones."
When was the last time you used a trusted computer? Do you work for the NSA or something?
"BTW lock-outs after 3 or more tries only stop a hacker if the device has an internal battery back-up or some form of internal energy storage. I'll bet this USB stick has neither."
Did you read my post above where I give a link to the ClevX White Paper?
The device has a battery and you can enter the pin whilst it is not connected to anything. As long as you plug it into a USB Host within 15 secs it looks just like any old USB Mass Storage device.
The scary bit as I pointed out is the "lock" part appears to simply turn the USB interface from Mass Storage to something like USB power charger, so the host does not see it as a valid USB device any more.
It talks about being a replacement for "biometric" locked USB devices etc. It does not mention encryption at all.
Which is why I put forward the thought about how it activates the encrypted flash storage behind it...
If it's a "good to go" signal on a PCB track, then it's most definatly game over before it starts.
"Companies are starting to come out with products with genuine security, don't kick them in the nuts."
I'm not sure that this one qualifies on "genuine security".
If you read the ClevX White Paper I posted a link to above, you will find it mentions SarbOx, and how its technology (at a very poor stretch of the imagination) meets the "two-factor" requirement...
There is a trend in the industry to make products that meet the check-box list of faux security requirements of SarbOx auditors, not knowledgeable evaluation by security advisors.
It's kind of like the difference between "crystal healing" and "modern surgery": the difference is not that important to you until you get a burst appendix.
With this product, the more I read the less I like what I'm reading.
The advertising blurb says it is full AES-256 encryption of the data on the flash chip.
The white paper suggests it is just an enable/"lock-out" protection, probably implemented on a cheap flash-based microcontroller like a PIC or 8051.
If it is the latter, then this is a truly pathetic attempt at data security. It might stop script kiddies, but that's about all it would stop.
Unfortunately, I'm also finding that security concerns have created a whole industry stamping and certifying what they barely understand. I had to talk with an auditor asking about SarbOx security certification just the other day; my, what a painful experience!
"When was the last time you used a trusted computer? Do you work for the NSA or something?"
I work on a trusted computer most of the time, because I'm investigating the design of such a beast, designed around minimal-cost COTS components in a new format with much finer granularity than existing systems.
However, because it's a work in progress, it cannot be trusted by some people's definition of the expression "trusted computer," and I would argue that their definition is wrong anyway.
Which highlights the issue of what we mean by "trusted computer" anyway, and in what "time frame"?
As the old joke has it, you can trust a computer that
"is embedded in a block of concrete at the bottom of the Mariana Trench."
You can tell it's an old joke because NASA, amongst others, is going there. ( http://en.wikipedia.org/wiki/Mariana_Trench )
So the joke needs a "maybe" update added to it.
That's the trouble with security: it gets harder, not easier, as technology improves.
"Unfortunately, I'm also finding that security concerns have created a whole industry stamping and certifying what they barely understand."
I'm not sure it's actually "security concerns"; I'm coming around to the opinion that it's "liability concerns."
That is, somebody on the board says, "What's our liability under...?" and from that point onwards the focus is not on security but on liability mitigation under whatever "business rule" they are most concerned about.
With regards to,
"I had to talk with an auditor asking about SarbOx security certification just the other day; my, what a painful experience!"
Yes, driving six-inch nails into a concrete block with your forehead can be a less painful experience.
And it's not just SarbOx; it's the Payment Card Industry (PCI) nonsense as well (as you might find out shortly, depending on who you work for ;)
Security by penalty avoidance is not security; it never was and it never will be. All that happens is that risk gets externalised and, like a hot potato, gets passed down the line; at each point people mitigate a singular risk in a specific way and pass it on.
Sitting on the side of this downward spiral are auditors who are making money by certifying that an organisation meets a particular requirement.
And this is where a conflict of interest arises.
The auditor is paid by a company to certify whether it meets the narrow requirements of liability-X mitigation.
The auditor walks in the door, takes one look around the place, and thinks "OMG, this is a pig's breakfast." They have three choices at this point:
If they tell the client "your security is a pig's breakfast," they know in all likelihood they will be shown the door.
If they say nothing and do a general audit, then the company will not get the certificate, so the auditor will effectively be shown the door.
If they shut their eyes and walk a very, very narrow pathway down a checklist for one singular risk, then they can supply a certificate.
In the last case the auditor gets paid, the company gets its certificate that allows it to carry on doing business, and the chances are the auditor will get more business.
The only problem for the auditor comes when something goes wrong (and it will): will a judge accept that the auditor issued the certificate "honestly"? To decide this, the lawyers will pull out the contracts, rules, and audit procedures and go through them with an electron microscope looking for "liability"; they will then argue the case in front of the judge, and all sorts of "legal profiteering" tactics will be played, such as "electronic discovery."
At some point either bankruptcy or a judgment will put a final end to the issue.
However, my betting is bankruptcy for the little organisations and "out of court" settlement for the big boys.
It is the almost inevitable consequence of "self-regulation" in a monopolistic market where liability can be externalised.
"And it's not just SarbOx; it's the Payment Card Industry (PCI) nonsense as well"
I agree with this statement. "Common Criteria" hardware certification is a bit of a joke.
The reality is that VERY VERY few people are "properly" qualified to do this, especially in the area of secure chips.
Unfortunately, for the certifying company, it is logical to pick the least qualified certifier.
There are three reasons for this:
1) The "it's just paperwork anyway" attitude...
2) Anyone really qualified probably works for a competitor or will work for one sometime soon. So don't tell them your latest secrets...
3) Worse still, they are the enemy (a hardware hacker).
So the reality is that they want to reveal as little as possible and get the stamp.
@Nick P: "First, an encrypted external drive isn't a deterrent against eavesdropping: it promotes it by making eavesdropping more productive, either physically of the individual or digitally of the PC processing the sensitive data."
I appreciate your courteous response and agree with most of what you wrote. In regard to the above quote, when I say "deterrent to eavesdropping," what I mean is that one cannot log keystrokes to obtain a key on such devices. I probably used terminology that drew another conclusion, and I appreciate you calling me on it.
This definitely improves security for casual users. Assuming the implementation details aren't obviously flawed (e.g., no password stored in cleartext), the attacks that are still feasible in my mind would be:
1. Steal the device, take it back to the manufacturer or your lab, connect it to a JTAG debugger, drop new software on it that allows unlimited brute-force attempts by USB. Brute-force the likely ~25-bit key quickly using a normal PC.
This assumes that the anti-debug feature has not been set on the microcontroller. If it has, you could possibly still replace the chip or use hardware tampering techniques to repair the on-chip fuse. In any case, attacking the hardware adds time and expense.
2. In the real world, this thing still needs to be attached to a PC to be used. So the likely attack is going to be installing some malware on the PC to simply copy the stick's contents the next time the user logs in.
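For a sense of scale on attack 1, here is a back-of-the-envelope sketch; the guess rates are illustrative assumptions, not measured figures:

```python
def brute_force_seconds(digits: int, guesses_per_sec: float) -> float:
    """Worst-case time to exhaust an all-numeric PIN space."""
    return 10 ** digits / guesses_per_sec

# With the 5-tries/2-minute lockout intact, the device throttles an
# attacker to roughly 5 guesses per 2 minutes.
throttled = brute_force_seconds(10, 5 / 120)      # ~7,600 years for 10 digits

# With the lockout bypassed (e.g. after a JTAG reflash), a PC testing
# candidate PINs directly might manage millions of guesses per second.
unthrottled = brute_force_seconds(10, 5_000_000)  # ~33 minutes
```

The gap between the two rates is why the lockout, not the key length, carries all the security here.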
I view this situation as somewhat of a "two-factor" authentication: something you have (the USB stick) and something you know (the pass code). So if a system has a key-logger to capture your code, the perpetrator would still need to steal your USB stick. I used an IronKey device, and for the passphrase I always try to use upper and lower case with digits and special characters (maybe 52 variations); but even with that you would still need about 45 characters to match the 256-bit entropy. I'm only using 14 characters right now: fairly easy to remember, but the "secret" data I have isn't worth much more effort!
If the device has a battery, couldn't you just disconnect it to avoid the lock-out?
Thanks for the clarification. ;)
Well, it's a step in the right direction. The real problem is that companies are marketing these devices like they provide real protection against more than casual or amateur attackers. Common Criteria is also misleading and part of the problem. So many companies say "Certified as secure at EAL4+ of the Common Criteria," and businesses say, "Hey, the government tested and certified them, so they are good." They neglect to mention that CC is mainly about the development process, and never tell people that EAL4 is designed to protect against "inadvertent or casual attempts to breach security." Inadvertent isn't a fitting description of most attacks on crypto. ;)
I'd like to see more devices with strong tamper-resistance, or strong design that eliminates the need for it. For instance, tamper-evident hard drive enclosures that derive the key from an onboard secret, a user PIN, and a physical token like a crypto-key. This mimics the NSA design. The buffers would use fast RAM and all information flow would be tracked. When the drive is stopped, for any reason, the key and all that data are quickly overwritten. I often promote separation kernels for high assurance because MILS is ideal for this. For red-black separation, label each partition red, black, or MLS. The security policy is two-level: red & black can't talk, but they can both talk to MLS, with a capability-based scheme further allowing very specific interactions between MLS & non-MLS protocols. The MLS components and their interactions with others must be verified, but any other software component can fail without violating red-black security.
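The three-input key derivation described above could plausibly be done HKDF-style (RFC 5869). This is a sketch of the idea only, not the NSA's or any product's actual construction, and all the labels are made up:

```python
import hashlib
import hmac

def derive_drive_key(onboard_secret: bytes, user_pin: str, token_secret: bytes) -> bytes:
    """Mix three factors into one 256-bit data key, HKDF-style.
    Missing any one input makes the output unrecoverable."""
    ikm = onboard_secret + token_secret + user_pin.encode()
    # Extract step: condense the combined input keying material.
    prk = hmac.new(b"drive-kdf-salt", ikm, hashlib.sha256).digest()
    # Expand step: bind the output to its purpose label.
    return hmac.new(prk, b"aes-256-data-key\x01", hashlib.sha256).digest()
```

The point of the construction is that an attacker with only the drive (onboard secret) or only the token still faces the full keyspace.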
Designing an external hard drive with no remote attacks and few feasible software attacks is damn near trivial using best practices developed over 50 years. Defeating hardware attacks is far from trivial, but most "secure" drives can't even solve the software problem. Companies could do better and it wouldn't cost much more. You see, Sam, most readers of this blog know that most of the problems in encryption products are preventable and that the company is just spending a lot more on marketing than engineering. Just a little more money & security greatly improves. Most companies refuse to do this because they don't really care. So, whenever they label a poorly developed product "secure," promote it for critical stuff, then it gets busted, the good folks on this blog tend to show the greedy bastards no mercy. I can't blame them.
"I'd like to see more devices with strong tamper-resistance or strong design that eliminates a need for it. For instance, tamper-evident hard drive enclosures that derive the key from an onboard secret, a user PIN, and a physical token like a crypto-key"
I'm not sure where to weigh in on this wish. Clearly a few cents added for crypto hardware will permanently defeat script kiddies; however, once a competent "hardware hacker" has physical possession of the hard drive, it is really just a matter of time and budget before he gets at the stored data.
So you need to define what you believe your adversary is technically capable of and willing to pay for the information on the hard drive.
"you need to define what you believe your adversary is technically capable of and willing to pay for the information on the hard drive."
I'm totally with you, here. I'd go further and say "capable of" in general, as it might be a more low-tech solution: shoulder surf the PIN, then later let user fall asleep or "assist" them, then take crypto key. So, we definitely need to weigh the value of the information vs. attack vectors vs. existing countermeasures. Risk management in its purest, there.
The point where I take a detour is your claim that a hardware hacker getting access can get the stored data. It depends on a lot. If the data is encrypted with an onboard key, then sure. If the encryptor requires another device to form the master key, then it must be swapped for an identical, subverted one, & a method must exist to get the data off of it later: either a covert channel or stealing it again. However, if the crypto-key & drive use public-key crypto built into their hardware & authenticate their traffic, then it might take a really sophisticated attacker to retrieve the inter-chip encryption keys in order to subvert the device. Additionally, if all the attacker got was the hard drive, then they would have nothing. So, depending on the attacker's level of skill, there are numerous levels of security provided, and each will defeat a particular class of attackers. Believe it or not, the last might cost thousands to attack physically (except if the user is attacked), but only a few hundred extra to build.
I have been toying around with a design idea for a tamper-resistant system. The idea is a tiny, trusted embedded computer. It would be small, like a smartphone or PC card. Instead of computers with ports for IO devices/mediums, we would essentially have "docks" for this computer. There would be a notebook, a desktop, a server, etc. They might offer expanded functionality, particularly devices, but this embedded computer would be the root of trust. It would have the user's private key, security policy, hash values for stuff on the "docks," a highly secure OS, restricted DMA, etc. Such a limited system would be easy to secure from software & pin-IO attacks. The other systems would be built on its security. Someone might say, "but we already have TPMs and all that good stuff." Well, they're too big to carry all the time. If we use this approach, you can plug your trusted card into the hotel computer, use it, and then remove it when you're done. You could sleep with it, hide it in a field, whatever. To subvert your critical apps on any of these docks, the trusted system must be subverted. My prototype design is basically an embedded coprocessor board with medium assurance. It hooks up to the main system via FireWire to load the software to memory and regularly check its integrity, and it can be asked by apps to perform sensitive operations. It's a kludge, though. Real hardware designers could do better.
"The point where I take a detour is your claim that a hardware hacker getting access can get the stored data"
Sorry I did not mean to imply that the stored data was unencrypted, only that it is what is on the disk. There are numerous ways to use two key systems to encrypt the data so that decryption requires access to information that is simply not present on the disk.
Unfortunately, most of these protocols also defeat the intended purpose of the device, namely as a portable data storage / inter-machine data transport device.
When you physically split the key into multiple portions, human behavior is also uniquely optimized to defeat your best efforts. You will often find that the software key portion gets written in plaintext on the device (the user's logic being that it is useless because it is only half the problem, "so I'll make it easier to use/remember"). If you add a second device, say an RFID secure smartcard, to hold the second key, then it will be taped to the hard drive (rather than being kept separate on a key chain). It's always human nature that's the weakest link.
What's interesting is that PUFs (as discussed elsewhere) are actually one of the best ways to generate keys for something like a hard drive. The reason PUFs are very valuable here is that the PUF never needs to be revealed outside of the drive.
Alright, I'm seeing your real point now. So, you're concerned that the human factor, particularly convenience, reduces the security margin of the medium assurance techniques? It can indeed. (sighs) I think it's incorrect to assume that users will always do these things. I've actually seen it done less than half the time among lay users. In many cases where it was done, it was insignificant: user who trusts others in the building, but not hackers, posts password on monitor. That's not good, but the password security against remote attackers will hold. So, the question is how do we deal with that?
I have to clarify something about my little smartcard-like device. It's not intended to go with just that one product. It will work with many products and solutions. It will be exclusively manufactured in a keychain form factor to increase odds of a user putting it there. It would work with several solutions: hard drive encryption, VPN, user authentication on laptop/desktop, etc.
Users would be educated that, like a credit card or house key, it should be protected at all times. This, coupled with the fact that it's an expensive secure drive, should keep most users from doing things like taping the USB key to the drive. It would be easiest to pull off in corporate environments, where policy and strong enforcement could ensure that a certain minimal amount of care is taken all of the time, with extra care taken some of the time. Again, this should improve security in many cases. Robert, I'm not trying to solve the whole problem of INFOSEC here: that's impossible. It's really a bunch of separate problems requiring separate solutions that integrate in a holistic way. These designs just form one or more components of the total solution.
I've read most of these comments. Please tell me what I'm missing.
We're focusing on the tree, not the forest. The question is not "How do I make a USB key safe?"
The question is "How do I store data securely and portably?"
It seems to be agreed that whatever the merits of the device in question, sooner or later it needs to be plugged into another device to allow its decrypted data to be human-readable. For convenience, let us call this other device a "computer".
The question is not whether said "computer" is "trusted" or "trustworthy". The only question is: Has it been compromised? If it has, the whole thing is an exercise in futility.
Right so far?
So we *must* assume that it has not, for any of this to be meaningful. Therefore, regardless of whether the computer is trustworthy, you must choose to trust it to proceed. Right?
OK. Now that we've decided that this particular box will be trusted, let us use TrueCrypt or something similar to encrypt our sensitive data. (The plausible deniability is a bonus feature that the USB stick does not offer by default.) This allows the use, not of ten digits, but of all keyboard characters, for the password. Even a laptop keyboard has about 90 useful characters. Using only a ten-character password offers about 2^65 possibilities, if my math is any good. Fifteen characters offers about 2^97, same disclaimer. I think we're beyond brute force here, lockout or no. 20 characters gets you almost 2^130.
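Those estimates hold up; as a sketch of the arithmetic (assuming the ~90 usable keys mentioned above):

```python
import math

def password_bits(alphabet_size: int, length: int) -> float:
    """Entropy in bits of a password drawn uniformly from the alphabet."""
    return length * math.log2(alphabet_size)

password_bits(90, 10)  # ~64.9 bits
password_bits(90, 15)  # ~97.4 bits
password_bits(90, 20)  # ~129.8 bits
```

These figures assume each character is chosen uniformly at random; a human-chosen password of the same length has considerably less entropy.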
OK, now use your thumb drive for backing up your letter to Aunt Emily, and burn the TrueCrypt volume to a CD (or DVD, if it's that bulky). No more worries about electronic forensics, popping the lockout pin, dismantling the chip: any adversary can freely read the laser pits in the disc, but what good does it do them? Their only possibility is of the rubber-hose genre, and nothing works against that.
I don't know how the various materials used in CDs or DVDs show up on various scanning devices versus the *metal* USB stick in question (answer, please?), but getting the medium from point A to point B is a separate issue, to be solved separately. The disc is greater in area, but much thinner, than any USB stick. Work with that. (Sandwich it in between your collection of Michael Jackson CDs.)
Assuming that you carry your chosen, trusted computer with you -- or have it when you receive the CD from a source equally trusted -- you're in. If it's lost or stolen, you of course made backups, and the finder or thief gains nothing.
The USB stick is the wrong avenue in the first place: too vulnerable to front-channel, side-channel, and physical attacks. But if you *must* have it for some reason, just encrypt the data with TrueCrypt before writing it to the stick. The USB keypad keeps casual thieves out, and the pros who get into it are probably going to rubber-hose you anyway, so why even bother with the extra expense, and with shouting to the world, "I have super-secret, valuable data on this stick"? (Yes, obscurity is not sufficient security, but calling attention to your high-value data is a negative for security.)
Oh, and use TC in Portable Mode on BartPE, to leave no traces on the encrypting and decrypting machines. No need even to install TC on the machine, which might otherwise be a further flag to your adversary.
What am I missing?
@ Tom T.,
"What am I missing?"
How about stopping,
"the pros who get into it are probably going to rubber-hose you anyway"
It is actually not that difficult to do if you know how.
There are ways to share a secret with a number of people and there are ways that a secret can be made to only work the once...
If you have to phone three friends and convince them you are OK and not under duress or chemical influence to get the parts of the shared secret, the chances are that if they know you well, they will hand you a duress secret if required.
Now the trick is how to make a system whereby if any one of the three suspects duress and gives you their duress secret, you get the duress key; but only if all three give you their non-duress secrets do you get the non-duress key...
It can be done in a number of ways; the question is which is best.
After all what you don't know (might hurt you but) you cannot tell.
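The "all shares required" half of such a scheme is easy to sketch with XOR splitting; any single missing share leaves the key unrecoverable. (The duress-key variant needs additional machinery on top of this; the sketch below covers only the basic split.)

```python
import secrets

def split_key(key, n):
    """Split key into n XOR shares; all n are needed to recover it."""
    shares = [secrets.token_bytes(len(key)) for _ in range(n - 1)]
    last = key
    for s in shares:
        # The final share is the key XORed with all the random ones.
        last = bytes(a ^ b for a, b in zip(last, s))
    return shares + [last]

def recover_key(shares):
    """XOR all shares back together to reconstruct the key."""
    out = bytes(len(shares[0]))  # all-zero accumulator
    for s in shares:
        out = bytes(a ^ b for a, b in zip(out, s))
    return out
```

With n-1 shares an attacker learns literally nothing: every candidate key remains equally likely.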
TrueCrypt is a great product, BUT how can you ever be certain that the encryption program you are running has not been compromised, or that some key-logger is not installed on the machine you plug the USB stick into?
Sure, the attacker still needs to get physical possession of the USB stick, but in the case of a targeted attack he will intentionally first install the key-logger and then steal the USB stick.
@ Tom T.
Sure, TrueCrypt to encrypt files and put them on a USB drive: it's definitely a common approach. Here's the problem: as BartPE lacks a full desktop or notebook suite of useful applications, it's not going to be used to work on the stuff in the TrueCrypt volume. Hence, the main OS will be used wherever it's plugged in. That brings me to your next point: the computer is trusted. There are so many ways to compromise a regular PC, but not as many for a USB drive. The problem is that the PC must be able to protect the password/PIN. Even if malware is there when the secure drive is mounted, it will only have temporary access to the files. If the password, PIN, or key is found, the access is permanent, and the PIN or password might be used to gain access to other machines. Unless you have an OS worthy of being "trusted," it shouldn't be. Modern OS's can't even provide a trusted path, much less trusted authentication to devices.
These are the most basic issues. The other issues to worry about involve attacks on the platform: they might attack the BIOS, do a cold boot attack (or the recent warm version), rootkit your computer with an evil hypervisor, rootkit the OS itself, or create a bogus application to simulate the device's PIN-entry system. Too many ways for it to go wrong. By contrast, a USB drive with a security kernel, encryption, and trusted path fares far better. It's also easier to do in a hard drive form factor than a thumb drive form factor.
As for your opening statement, it would seem that you are also focusing on a tree. You set out to build a secure, portable way to store data and your answer is to start by trusting OS's that have had hundreds of vulnerabilities and run millions of lines in kernel mode? Focusing on the forest means building a secure system as a whole. To do this, one first starts with a secure foundation. This is another reason why it's easier to secure the USB drives: they are specific purpose, require little software, and hence can be made in minimalist, secure ways. The alternative is doing it your way on a desktop built on top of a verified platform: INTEGRITY; LynxSecure; GEMSOS. Although, you kind of have to code it all yourself to ensure secure integration of various components. Just remember to open-source it and share it with us if you decide it's worth the effort. ;)
Clive, thanks for the rubber-hose solution. Will keep that in mind.
Robert, please see my opening. The USB has to be plugged into something, as you said, and if that something is compromised, *nothing* will save you. Hence you *must* trust *something*, rightly or wrongly, or you go nowhere.
Nick P: OK, but you're taking the issue far beyond the OP, which dealt with selling ordinary consumers and businesses an inexpensive way to (presumably) store portable data securely. I tried to stay within those constraints, and suggested that for that market (98+%?), the encrypted CD might be better.
Of course there are a zillion ways to compromise any (common) thing. See comment to Robert.
How is your "secure" USB drive to be read? Does it include its own reader, or does it plug in "somewhere"?
I think I might not have been clear on the BartPE approach. You never boot your OS. You boot the Bart PE that you made from a known, uncorrupted Win disc, *while not connected to the Internet*. You run TrueCrypt in Portable Mode, which leaves no traces in the OS, registry, whatever. Please see http://www.truecrypt.org/docs/?...
and http://www.truecrypt.org/faq (search "traces") for info on running TC just enough to do your work, then shut down. (Disclosure: I have no connection whatsoever with TrueCrypt.)
The rest of your attacks -- cold boot, etc. -- seem equally possible with any USB stick. Note that I did not suggest putting the TC volume on the USB stick, except if you *must* use USB for some reason. I suggested the CD/DVD, for reasons stated in my first post.
"Although, you kind of have to code it all yourself to ensure secure integration of various components. Just remember to open-source it and share it with us if you decide it's worth the effort. ;)"
Thanks; I'll leave that to others. Which takes me back to my point: What is the best way for most people to use COTS or open-source OTS components to store and *move* the data around securely? I believe that my suggestion has advantages over the product in question, for multiple reasons. **Not perfect, but better than the ten-digit USB stick advertised.** That was my bottom line.
Thanks to all for replies.
I know Bruce hasn't the time for the likes of this, but I surely wish he could weigh in on the pros and cons of this approach vs. the product advertised.
Clive, I've been reading a number of previous posts where you have discussed RFID "side channel attacks". Clearly, things like the TRNG are very susceptible to such attacks; however, it seems to me that it might be possible to even program/reprogram the internal Flash/EEPROM memory directly, using only an appropriately RF-modulated field and a synchronized timing attack.
The reason this attack might work relates to the manner in which the on-chip power is regulated on most RFID chips: regulation intended to reduce unintentional backscatter.
If you know of any such attacks, then I'm very interested in the details.
BTW: An attack like this completely bypasses the program execution and crypto system.
Well, your constraint that you've mentioned explains part of my post: my solution is designed to provide *real* protection against *real* threats, not a security theater show for the layman that will buy anything. Also, that kind of layman isn't 98% of this market: we're talking about somewhat inconvenient, more expensive drives here. That's actually quite a smaller market. The people willing to buy this flash drive may be willing to buy a portable, external HD with real security. But, moving on to your real question: "what about the BartPE/TrueCrypt solution for most lay persons?"
I know about BartPE. The reason I said what I said is because of how the layman (average, 98+%) thinks. Well, that solution isn't nice for the layman. The layman wants convenience, and many who know about TrueCrypt or PGP and their ease of use are already buying some cheap thumb drive with a PIN. Now you show up talking about saving their documents, shutting down their computer, booting some CD, plugging in a device, maybe waiting for driver installation (again!), authenticating on it, loading up TrueCrypt, typing in some long/good password, working on stuff, saving stuff, closing TrueCrypt (taskbar too, for the next part), "safely removing" the thumb drive, and restarting the computer, re-entering the password in the process. Wow! It's that simple?!
The geek class has already had a simple & somewhat secure solution to malware problems: keep an up-to-date, read-only, RAM-only, locked-down Linux distro on a CD-ROM. Also, have your bank's public key written down. Boot up off the CD-ROM, connect to the bank, do your stuff, and get off. A separate computer for critical stuff that never goes online otherwise is also nice. Even the geeks usually don't do this stuff, security geeks included. The steps involved in your solution are quite involved for the layperson who hears the marketing material for this thumb drive and is like "well, why don't I just do this? it's easier!" Perhaps they see some SanDisk encrypted thumb drive and its FIPS certification and think, "If the government trusts this thing, then so will I!" Don't get me wrong: your idea definitely has merit on technical and security grounds. As a matter of fact, I use a similar solution with Ubuntu & Incognito (overwrites RAM after use) in prototype private-viewer designs.
I just don't see damn near any of 98+% of the market doing your complicated solution every day. They'd rather just buy something they can type some crap into and plug in: maybe my ultra-secure external hard drive that's somewhat expensive, but most likely a really cheap encrypted thumb drive with little to no security. Convenience trumps tedious security. It's just how the market works, Tom. That was my point.
@ Clive: That which I do not know cannot be beaten out of me, but I can still be killed or, more likely, maimed, for refusing to reveal it if my adversary cannot be convinced that I don't know it.
Nick P, thanks for your continued courteous reply. I *did* fail to state my target market and constraints in my OP, and apologize for the suggestion that my solution was "perfect".
I'm looking, first, at those who already trust their boxes, rightly or wrongly, and think the advertised product is "the answer." Many comments here suggest that it is of dubious value, even on a "safe" platform. Assuming said trusted box, it is not at all hard for a non-tech user to use TrueCrypt in normal mode; their step-by-step screenshots are excellent. The CD was suggested as being not subject to the physical destruction, electronic attacks, etc. of the USB stick, but instead relying on the encryption of TC. To me, the USB stick is the "security theater": a much more false sense of security.
Second, I went into the Bart thing to try to forestall and address the inevitable comments that no OS or computer is trustworthy. It's definitely a step up the tech scale. I realize class (1) won't do this, but I still think they're better off my way than the advertised way. Your opinion on that specific comparison is welcome.
I agree thoroughly that most users will buy the SanDisk FIPS, or this one, and think they're done. "We" need to disabuse as many as possible of this idea.
As you say, the geeks already have much more secure platforms, so I should have left that Bart part out (but would have been attacked on those grounds, LOL). For that portion of the masses who follow reasonably safe hex, and are not ultra-high-value targets, I think that encrypting your files with TC is quick, once you get used to it, and adds safety to either any USB stick or the CD, and eliminates the need for the "security theater" costly USB gizmo.
And as mentioned, a USB stick with those buttons screams to the world "I have valuable data worth stealing", while a blank CD -- you can make your own label. "My visit to Grandma's house for Christmas". Every little 'bit' helps (ha ha)
For those already compromised, we can't help them at this point.
For known ultra-high-value targets, they're going to have to move strongly in your direction, and hopefully choose the right consultant, of which there are few.
I think we're not as far apart as it may have seemed, and that was my fault for not iterating the above distinctions. The post was already amply long, IMHO. I've enjoyed our discussion, and if you have any further comments, I'd welcome them. I'm just trying to come up with something better than the advertised product, that is still within the skill level of most users (and in the case of TC, free.) Thanks again.
P.S. I trust Password Safe, which Bruce fathered (or at least donated the sperm, lol), for those long, safe passwords -- it makes using crypto-strength passwords a breeze, and it's also free, so the TC process is sped up by it too. Again assuming no keylogger or any other malware on my machine, am I not safe?
@ Tom T
Your point about the dubious nature of the advertised product seems right on. I also agree that TrueCrypt is usable, with a little training, for non-technical users. Some will need more training than others, but I'm sure most can learn to use it. The big question is: will they go through that trouble every time they want to access something confidential? I'm doubting it. Also, the use of a TC-encrypted CD medium instead of the USB device simply shifts the angle of attack from a simple, very secure device class to a complex, vulnerable-as-hell device class. If the BIOS/firmware/hypervisor-layer is attacked, then the CD-ROM doesn't ensure a trusted initial state. See the Blue Pill attack for an example. Now, if this isn't the case, then BartPE loads and the system is in the state it was on the disc, hopefully uninfected.
So, what can happen next? A BartPE system with certain apps will have the same vulnerabilities a regular Windows system would. If it has Internet connectivity, it can be hit that way. If a user surfs the Web while working on confidential documents, a simple drive-by-download or spoofing attack can get the key. If they leave for a minute while BartPE loads, an attacker could do a firewire-based attack before they get back. The thing that all of these attacks have in common is that they rely on the weaknesses of the platform and that they all get the key. At that point, the attacker can have permanent, unrestricted access to the data on the medium. A trusted-path, PIN-protected USB drive wouldn't allow this even if it's in enemy hands. With these points in mind, I'll address your other questions.
Your next point was that the solution you mentioned is still better than the advertised one. I'd agree that the advertised one & most FIPS drives are pure security theater, while your solution protects against *some* threats. So, I'd agree that TrueCrypt+BartPE is better. As for the ultra-high-value target part, you don't have to be one. The real question is: is the information on your drive worth more than it costs to extract it? If it is, then a corporate spy might try to attack it. If it's on a regular laptop/desktop, he will greet it with a smile. If it's some secure USB drive & he has to subvert it too, he will be unhappy with the extra time & money he will spend: reverse engineering; hacking; making it look unmodified; doing all this before the user notices it's missing. Most corporate spies don't get violent, so there is a ton of risk in attacking a secure platform vs a "make yourself at home" OS. ;) Putting the security into a separate device isn't security theater: special-purpose devices are inherently easier (and cheaper) to secure. There are also dedicated, secure coprocessors to help.
So, any targeted attack is better off when the user is using TrueCrypt on a mainstream OS. You don't have to be ultra-high-value to attract a sophisticated attack: you just have to be worth a few minutes and a firewire cable, or one paid-for custom firmware hack. Script kiddies could pull that off with prepackaged tools. The average user trying to protect their tax records might do this, but the computer's OS/firmware must be *really* locked down (and have no firewire) before I'd trust it. But, if you are bent on coming up with such a method, a custom Linux distro on read-only USB drives (they exist) with a secure, signed update function & TrueCrypt could do the job. The hardware would have a protected BIOS, be hard to disassemble, and support Intel vPro in its safest incarnation for protection against rogue devices & modified software. The reason to use a custom Linux distro is that you can build it bottom-up & only put in what you need, reducing the attack surface.
Finally, I don't trust Password Safe (or KeePass, for that matter). I like to think of it as a variation of the Lord of the Rings line: "one [keylogged] password to pwn them all." Password Safe's design assumes that the underlying system (a) has a trusted path and (b) is free of privileged malware, which would also defeat (a). Today's systems are riddled with insecurity at every level, so neither assumption is safe in theory or practice. Just look at the size of the botnets (read: pwned computers) these days & how quickly they rebuild them. You can use the password managers, but you are taking great risks. If you value the convenience over security, then that's fine as long as you know you're making the tradeoff. I keep thinking about porting it to an external, secure coprocessor, but there are still so many risks. We need a trusted path in the OS... and now the browser, which is a crappy second OS these days.
And your post wasn't too long: it clarified plenty of points, allowing me to focus on what mattered. Besides, Clive's posts are like a DOS attack on my system resources: if I can survive them, then yours should be no trouble. ;) I've also enjoyed our discussion & I hope that, as always, the details in it benefit casual passers-by who have similar questions as you. I also hope, consistent with one of my personal goals, that I've shown you just how important platform-level security is & why we need to replace our current platforms. I've been promoting certain architectures & OS's that do a better job. If you want to look at a small sample, then look at the links in my last post to Winter (near bottom at 2:08pm) here:
Great comments all. However, one point made very clear is that the device itself screams "valuable data installed here." Not many have mentioned one major issue, which is traveling with it through airports. Since the TSA has, I believe, the legal right to ask you for your passwords on any device going through the airport, one would assume that they would skip a music CD with a TrueCrypt container file, but definitely ask you for the PIN or password to a device that screams "important secure data resides here." Also, where is the double-blind password implemented on this so-called secure device? If it does not exist, then one PIN easily opens it and gives up everything inside.
I agree that one could in fact encrypt everything on it with a double password, one for garbage files which can easily be given to an airport rep or other official who asks, and the other for the important data. But a device such as this screams to the public when you travel that important data may in fact reside here, instead of on that music CD where you just so happen to have the real important data.
I think it is worth discussing, since most people would rather wrongly assume that they could travel with such devices through an airport without question -- which they could on a random basis -- but would have to be willing to allow $10-per-hour airport reps to scan everything on the drive prior to boarding if they so asked. And if you refuse, they can confiscate said drive, so you will lose the important data, or at the least they can deny you the ability to board.
Also one more issue. If we are talking about real-world applications for such a device, then we must look at the reality behind such devices in, say, a routine traffic stop. Officer A sees a super-secure device in your possession. Officer A asks you for the PIN so he can look at your data on his laptop; you refuse, thus giving him probable cause, based upon the stop, to run you in for a further examination. Again you are asked for your PIN; you give it up, so the data is no longer secured. If you do not give it up, what comes next can be more costly to you.
On the other point brought up about it being inserted into an untrusted computer: if that computer had malware on it with a logger, as soon as the drive was unlocked, the data would no longer be secure. This leads to the conclusion that the device is meant never to leave the office, travel, or be inserted into an untrusted computer, making it a totally useless device for any real-world security application.
It sounds more like a marketing device made to give the illusion of security where no security truly exists. If you are the sole holder of this PIN, you can be asked -- or even, in some cases, required by law -- to give said PIN to the appropriate authorities, thus rendering any data on said device insecure. As long as this is true, it is often better to use TrueCrypt and place a container file on a random music CD. When stopped or questioned by the authorities, they see a random set of music CDs instead of super-secure devices in your possession. If by any stretch of the imagination you are asked by the authorities to see the music CD, the officer hears music on it, allowing him to move onward without further questioning. The same applies in an airport: no secure-looking devices present, so data security is more assured. The less you look like you are carrying sensitive data, the better the chance you will not be molested or required to divulge said PINs.
> making it a totally useless device for any real world security application.
Scenario 1. You need to transport data between two computers you own.
(re: giving up the pin to the authorities)
Scenario 2. Your adversary is not the authorities.
Of course we, the readers of this blog, know that you should mix the entered key with the stored (random) key and then decrypt a relatively big block of data, with the final result needing to be a specific value for the decryption key to be correct. In practice, however, I'm guessing that the device simply holds your 256-bit key somewhere, and has firmware that refuses to initialize the crypto module if you don't provide the proper PIN.
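For what it's worth, the "right" construction Roger describes can be sketched in a few lines. This is a toy model, not the device's actual firmware: the KDF choice, iteration count, and verifier mechanism are all my assumptions.

```python
import hashlib
import hmac
import os

def derive_data_key(pin: str, stored_random_key: bytes) -> bytes:
    """Mix the user-entered PIN with the device's stored random key.
    Neither input alone is enough to reconstruct the data key."""
    # PBKDF2 also slows down PIN guessing; the iteration count is illustrative.
    return hashlib.pbkdf2_hmac("sha256", pin.encode(), stored_random_key, 200_000)

def unlock(pin: str, stored_random_key: bytes, verifier: bytes) -> bool:
    """Check the derived key against a stored verifier.
    (A real device would instead decrypt a known-format block and inspect it.)"""
    candidate = hashlib.sha256(derive_data_key(pin, stored_random_key)).digest()
    return hmac.compare_digest(candidate, verifier)

# Provisioning: the factory burns in a random key; the verifier is set
# when the user chooses a PIN (all names here are hypothetical).
device_key = os.urandom(32)
verifier = hashlib.sha256(derive_data_key("1234567890", device_key)).digest()

assert unlock("1234567890", device_key, verifier)      # correct PIN
assert not unlock("0000000000", device_key, verifier)  # wrong PIN
```

The point of the construction is that dumping the flash gets an attacker the stored random key and the verifier, but the data key still depends on the PIN through a deliberately slow KDF.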
@ Nick P.:
"If it has Internet connectivity, it can be hit that way. If a user surfs the Web while working on confidential documents, a simple drive-by-download or spoofing attack can get the key. If they leave for a minute while BartPE loads, an attacker could do a firewire-based attack before they get back."
Please note that I did specify what seems to be common sense for the security-conscious: do not be connected to the Internet when working with data that are sensitive enough to require encryption, whether they are headed to USB or CD or whatever. Leaving a machine unattended is *always* a risk; again, you don't do that when doing sensitive work. (I'd put the laptop in its case and carry it into the restroom with me, if it were a public place/airplane, whatever -- but mostly, try to do it at home.)
I'm well aware of the need for trusted platforms. I don't see the average home user getting one that they can afford; education in safe hex is probably the best that we can hope for. Gov and corp with huge DBs definitely need more than OTS stuff. No argument. GL with your system -- of course you'll let us know when it's ready.
(But how do we know that we can trust *you*? lol. -- open source, vetted by many pairs of eyes, etc. -- same ol', same ol. [wink])
"Password Safe's design assumes that the underlying system (a) has a trusted path and (b) is free of privileged malware, which also defeat (a)."
If those conditions are not met, then are you not hosed regardless of whether you've memorized an epic Norse pass-poem? As soon as you enter it....
"Besides, Clive's posts are like a DOS attack on my system resources: if I can survive them, then yours should be no trouble. ;)" ROFLMAO!!!! (No offense, Clive; you've made some good points. As Bruce so coyly says, "There are many comments, some of them interesting".)
Thank you; I've been saying exactly the same thing. Obscurity isn't security, but it can help. Putting up a neon sign that "I am carrying data worth stealing" definitely hurts. Good points on both TSA and police traffic stops (or other encounters). Pick a band that most cops hate, have the CD case on the back seat with old bags of half-eaten potato chips, etc... Filling it mostly with actual music files, then a hidden-volume TC volume, is cool, since a TC volume is designed to appear as random garbage to any analysis.
@ Roger Wolff: I'm mostly assuming that you are carrying your laptop with you, or else sending the CD to a trusted associate by snailmail, with a return address of MegaMusic, Inc., or some such additional subterfuge -- I haven't thought that part through thoroughly yet. Better: send a couple of sweaters in birthday wrapping paper, write "Happy Birthday" in big letters on the outside brown wrapping paper, and put the CD inside, between the sweaters.
@ EVERYONE: The excellent part of all of this is that it *gets us thinking*. Long before reading Bruce's statement that the best mindset for a security-conscious programmer or whatever is to think like a thief -- "How can I bypass this system" -- I used to amuse myself in airport security lines by visualizing ways in which I could get a banned item through, easily and with high confidence. The type of discussion here helps us all to look at the latest products and solutions, and decide: Snake oil? Genuine benefit? -- or the usual, "tradeoffs", and to determine the relative value of those tradeoffs.
FWIW, I lead a terribly dull life, nothing anyone would want to know about, just the usual few credit cards, etc. And I might want to keep private any records of a discussion with a lover, or of health, family, or other personal matters, even though these are of little value to a thief. (Try to blackmail me? [shrugs] Publish it, I don't care. End of blackmail. If you are in a sensitive corp or gov post, then live your life accordingly, Misters Spitzer, Sanford, etc.)
With a secure flash drive as discussed, what are your thoughts on the data files having a maximum lifetime?
What I mean is that the files will be automatically erased if not accessed with the correct pin within some "user defined" time period.
This feature would not stop anyone familiar with the device (who could disable the tamper detection and remove the battery in time); however, it would sure make the police look silly when they claimed some data was on the drive, only to find it completely erased because they waited too long between accesses.
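The proposed lifetime feature could be modeled in a few lines. This is a toy sketch under my own assumptions (a monotonic clock survives power cycles, and the idle window is user-set); class and method names are made up.

```python
import time

class ExpiringDrive:
    """Toy model: the key (and hence the data) becomes unrecoverable
    if the drive isn't successfully unlocked within an idle window."""

    def __init__(self, key, max_idle_seconds):
        self.key = key
        self.max_idle = max_idle_seconds
        self.last_access = time.monotonic()

    def unlock(self, pin, expected_pin, now=None):
        now = time.monotonic() if now is None else now
        if now - self.last_access > self.max_idle:
            self.key = None  # wipe the key: data is gone for good
        if self.key is None or pin != expected_pin:
            return None
        self.last_access = now
        return self.key

drive = ExpiringDrive(key=b"\x01" * 32, max_idle_seconds=7 * 24 * 3600)
t0 = drive.last_access
assert drive.unlock("1234", "1234", now=t0 + 3600) is not None        # an hour later: fine
assert drive.unlock("1234", "1234", now=t0 + 30 * 24 * 3600) is None  # a month later: wiped
```

Note the wipe happens on the next unlock attempt here; real hardware would need a battery-backed timer to erase proactively, which is exactly the component an attacker would try to remove.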
Actually, it's not a bad idea so long as people remember to reset the clock. The number of expired SSL certificates out there immediately comes to mind. If they forget or don't care to read the manual, then their data both expires & disappears.
If you're into fooling the authorities, look into deniable computing, like deniable encryption & deniable filesystems. Filesystems like StegoFS are pretty interesting. If built correctly, someone with a rubber-hose attack can't ever prove you have more data on the device. Of course, they can't prove you gave them everything, either. A double-edged sword that might be useful for certain people, but for the average person, possession of this might be an admission of guilt in today's courts.
I don't understand the point of this type of lock-out protection. I mean, if I have physical access to the device, I can dump the flash memory, then find out how the actual AES encryption key is generated from the numeric PIN (there must be an algorithm hardcoded somewhere inside the USB key), then decrypt the content of the dump file by brute-forcing over the PIN space in a reasonable amount of time. (Nowadays, a 10^10 key space is ridiculous.)
I think this is just another case of security-by-obscurity
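To make the offline argument concrete, here is a hedged sketch: assume, purely for illustration, that the firmware derives the AES key by hashing the PIN (the real algorithm is unknown). Once the flash is dumped and that algorithm is reverse-engineered, the lockout timer no longer applies.

```python
import hashlib
from itertools import product

def pin_to_key(pin):
    """Stand-in for whatever fixed PIN->key algorithm the firmware uses
    (purely illustrative; the real one would be recovered by reversing)."""
    return hashlib.sha256(pin.encode()).digest()

def crack(known_key_fingerprint, digits):
    """Offline search of the whole PIN space: no lockout, no delays."""
    for combo in product("0123456789", repeat=digits):
        pin = "".join(combo)
        if hashlib.sha256(pin_to_key(pin)).digest() == known_key_fingerprint:
            return pin
    return None

# 4-digit demo so it runs instantly. Pure Python manages very roughly
# 10^6 candidates per second, and GPUs/FPGAs are orders of magnitude
# faster, so even the full 10^10 space is within easy reach offline.
target = hashlib.sha256(pin_to_key("4821")).digest()
assert crack(target, 4) == "4821"
```

This is why the later comments argue a per-device secret must be mixed in: it turns a 10^10 offline search back into a search over a 256-bit space.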
I don't disagree entirely with what you're saying, BUT your attack depends on you accessing the PIN. Without the on-chip PIN, your attack becomes a brute-force attack on AES.
If these guys know what they are doing, they will protect that PIN and make sure it is not externally accessible, or even accessible by decapping and probing the chip. If they add some tamper protection, then they could have a very secure product.
(See my comment on this being an ideal application for an on-chip PUF (physically unclonable function).)
Well, no... my attack doesn't require knowing the right PIN, only the PIN->AESkey algorithm, which of course (as you pointed out) must be reverse-engineered by decapping the chip. By the way, decapping the chip may be very difficult and expensive.
However, there's no need at all to hardcode the right PIN inside the USB key, nor the AES encryption key. The user enters a PIN using the numeric keypad, the algorithm computes the AES encryption key on the fly from that PIN and tries to decrypt the data. Finally, some CRC or hash check can be performed to verify that the entered PIN was the right one.
I may not have explained my attack well...
Yes, the attack would be prohibitively expensive, which is the point of hardware security, since none of it is foolproof. We just want the enemies to look at the task ahead of them and say "it's not worth it."
Due to the many attacks, though, I generally don't recommend the PIN alone be the security. For one, a secure microcontroller should store the permanent secrets and produce the key from the PIN. Two, I'd prefer a design similar to the NSA's IME, where a PIN is combined with a Crypto Ignition Key. The form factor is a bit too small for that, though, so I was toying with the idea of a tiny key-like crypto device used alongside a high-security product like the thumb drive. The key device works with all devices the company makes & basically authenticates those devices using a shared secret unique to each storage device. The key-like device would store the master key used for encryption, sent to the thumb drive over a secure tunnel after it authenticates. Each would probably use one USB port, but you could take the key device out after it transmits the secret key. It could store keys for a number of devices. While this scheme is slightly cumbersome, the user would only have to protect & always carry that one key. Without it, the storage media would be useless. Note: the thumb drive would still have a PIN entry, but the PIN would be mixed with the secret key stored in the key device. After use, the decryption key would be overwritten with pseudo-random data.
Alternatively, the key device could have some kind of tiny connector that would be present on all company devices and be plugged in directly. This eliminates the need for crypto during transmission, but might raise the cost by using unusual connectors. The company could use it in a variety of medium-assurance products: secure storage; VPNs; link encryptors; authentication systems. There will have to be a revocation or reset scheme for when users lose their key, though, and that must be done *very* carefully. It's likely to be the weakest link.
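Nick P's mixing step might look something like this in miniature. The KDF and the zeroization at the end are illustrative stand-ins under my own assumptions, not his actual design.

```python
import hashlib
import hmac
import os

# Hypothetical secrets; in the proposed scheme these live inside
# tamper-resistant hardware and only the derived key ever leaves.
FOB_MASTER_KEY = os.urandom(32)   # held by the key-like device
DRIVE_ID = os.urandom(16)         # identifies one storage device

def data_key(fob_master_key, drive_id, pin):
    """Mix the fob's master key, the drive identity and the user PIN,
    so all three are needed to decrypt (KDF choice is illustrative)."""
    per_drive_secret = hmac.new(fob_master_key, drive_id, hashlib.sha256).digest()
    return bytearray(hashlib.pbkdf2_hmac("sha256", pin.encode(),
                                         per_drive_secret, 100_000))

key = data_key(FOB_MASTER_KEY, DRIVE_ID, "1234567890")
# ... bulk encryption/decryption would happen here ...
for i in range(len(key)):        # "overwritten with pseudo-random data"
    key[i] = os.urandom(1)[0]
```

The per-drive HMAC step means losing one drive never exposes the fob's master key, and the final loop models the zeroize-after-use requirement.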
@ Nick P
Very interesting idea. It reminds me of the Trusted Platform Module, in which users can safely store their account passwords and encryption keys. The only difference is that the TPM resides inside the computer (as a chip soldered to the motherboard or as extra circuitry added to the die of the CPU), whereas the key-like device is external.
I did not explain myself properly.
What I was proposing there was a separate secret number (say, 256 bits) on each device (a PUF, probably); this would be mixed with the user PIN to form a unique key, and this key would be so large that attacking the PIN-to-AES-key process would be useless. I think Nick P is suggesting the same.
Unlike Nick's solution, I would not allow the device's secret number to be accessible (or programmable) in any way, other than for internal AES key generation. Adding access to this internal key (PUF) creates too many security weaknesses. My solution is likely to make me sound like a fool, because the current thinking is that public-key methods are better suited to the task, the logic being that any stored secret key is like a huge flashing sign to a hardware hacker (dig here for gold).
My only defense is that I've always maintained that the data should first be encrypted on the host computer, before being sent to any external device. Additionally, I do not use a single secret key (common to all devices); rather, each one has its own secret key, which is completely unknown outside of the chip. Discovering the secret key is only possible by destroying the device (at which point the key is useless to you and everyone else). If you assume your adversary is capable of extracting the PUF key and constructing a similar device with your key, then I'd respectfully suggest that that data does not belong on ANY portable device.
In this existing product, the 5-key (10-digit) PIN is the obvious weakness.
For real security, this user "PIN" needs to be at least 256 bits long and possibly stored in some kind of secure master-key tag. The USB drive needs to make a secure tunnel. Nick's solution makes this master key another USB device, which must be plugged into the host at the same time as the encrypted drive. I would probably make the master-key tag a secure RF tag.
@ Nick P, Gianluca,
If it was me designing the product I'd,
First: make it look exactly like any USB extension "numeric keypad and tracker ball with memory card reader", and make it function that way as well.
Thus having it with a laptop etc. is perfectly plausible and desirable. You could also put a longer-than-normal lead on it and a little backlit LCD, and make it effectively a "presentation" controller as well.
Second: the flash storage would be an SD or other removable memory card; it would be perfectly normal except for a "crypto container" file.
Third: a "Crypto Ignition Key" as a VLF RFID key fob (or using a cheap RJ11/RJ45 connector, etc.).
If the keypad device had USB On-The-Go, then there are all manner of other possibilities.
Essentially, the keypad device would send out a challenge to the key fob and vice versa, via a "shared secret" protocol of some kind (which one does not overly matter, as long as it's replay-proof etc.).
Once the keypad device and key fob have negotiated and recognised each other, the "root directory list of the memory card" is sent to the fob. If the fob recognises a valid container name, it lets the keypad device know that a PIN should be entered.
If the user types the PIN on the keypad within a short timeout period, the PIN is sent to the fob (but not to the computer).
The fob then requests the first block of the container file and uses this "serial number", along with the PIN, to look up and decrypt the "AES Token" in its own internal DB.
It sends the AES Token to the keypad device, which then decrypts the token to get the actual AES key.
Thus to get the AES key you need to have,
1, A valid container.
2, The valid PIN
3, The keypad "secret"
4, The fob "secret"
The fob will store more than one AES Token per container serial number.
Thus several keys are available, selected by the PIN the user types in, so administrative and duress keys etc. are possible (i.e. the container is multi-level and the user thus has reasonable deniability).
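Clive's four-part binding can be mocked up in a few lines. Everything here is a stand-in (XOR "wrapping" instead of proper AES key wrap, HMAC-SHA-256 in place of whatever the real devices would run), but it shows how a missing container serial, PIN, keypad secret, or fob secret each breaks key recovery.

```python
import hashlib
import hmac
import os

# Illustrative per-device secrets (items 3 and 4 in the list above);
# a real design would keep them in tamper-resistant storage.
KEYPAD_SECRET = os.urandom(32)
FOB_SECRET = os.urandom(32)

def wrap(aes_key, keypad_secret):
    """Turn an AES key into an 'AES Token' only the keypad can unwrap
    (XOR with a hash stream here; a real design would use AES key wrap)."""
    stream = hashlib.sha256(keypad_secret).digest()
    return bytes(a ^ b for a, b in zip(aes_key, stream))

unwrap = wrap  # XOR wrapping is its own inverse

def fob_lookup(container_serial, pin, token_db):
    """The fob indexes tokens by container serial *and* PIN, so different
    PINs can select normal, administrative or duress keys."""
    index = hmac.new(FOB_SECRET, container_serial + pin.encode(),
                     hashlib.sha256).hexdigest()
    return token_db.get(index)

# Provisioning: store a token for PIN "1357" on container serial b"C001".
real_key = os.urandom(32)
db = {hmac.new(FOB_SECRET, b"C001" + b"1357", hashlib.sha256).hexdigest():
      wrap(real_key, KEYPAD_SECRET)}

token = fob_lookup(b"C001", "1357", db)          # after the challenge phase
assert token is not None
assert unwrap(token, KEYPAD_SECRET) == real_key  # keypad recovers the AES key
assert fob_lookup(b"C001", "9999", db) is None   # wrong PIN selects nothing
```

A duress key would simply be a second DB entry under the same container serial with a different PIN, decrypting to a different volume key.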
My designs specifically avoid trying to be deniable. If your scheme is secure, it doesn't matter if you advertise the importance or not. Sure, deniable schemes reduce your risk of being randomly targeted, or risks against availability of data. However, deniability provides little protection of assets except against the most casual or inept thieves and cops. The Feds I know will take every device you have "just in case," but my previous designs aren't really for protecting against Feds: they know how to find TrueCrypt "hidden" (Nick laughs) partitions using free tools online, and if they are after you then you have much more to worry about. Tiny flash drives, untraceable netbooks, innocuous RAM-based Linux distros, good hiding spots & innocuous ways of storing/retrieving these are the best way to hide stuff. I've got tons of proven schemes that I've designed and picked up from others, but if I share them they aren't deniable. (sorry)
That wasn't the point of my posts, though. I'm more worried about the people who are actually targeting me and my data: casual thieves; script kiddies; low-tech physical hackers; good software hackers; good but underfunded hardware hackers; sophisticated and well-funded attackers. My thumb drive scheme protects you in five out of six cases most of the time, but not the last, because that's cost-prohibitive. The cops target you if you are: disliked; conspicuous; somewhat suspect; a substantiated suspect; totally guilty except for a court hearing; a true rebel/saboteur/terrorist. Being a nice, helpful guy deals with the first three and sometimes the fourth. Good hiding and mobility strategies prevent the next two from being a problem. The last one requires paranoia most people can't stomach, and I won't even try to help you there. So, why are you guys focusing so much on deniability if it buys you little against any serious attacker, and theft losses can be defeated with a backup mechanism for encrypted data?
I was referring to an onboard secret that was difficult to access, but not PUFs specifically. I believe you, Clive and I have had the discussion on them before. I'm still avoiding them for now because their security properties are controversial, just as biometrics were early on (and rightly so). I also specifically avoided public-key cryptography because it is too complex, bug-prone, & resource-intensive. We're talking about a thumb drive here, whose bill of materials must be dirt cheap. For reference, my mutual authentication & master key generation scheme is very similar to the one Clive mentioned. There are some details that are different, but the principle is the same: one must compromise two devices, one trusted & one semi-trusted, plus a user PIN, to decrypt what's on one device. This situation is hard for both casual thieves and smart attackers, because the ratio of work to protect it vs work to break it works in the user's favor.
If we aren't using a validated PUF scheme and we want to be able to lose the drive, then the secret [partly] has to be stored elsewhere: the user's memory, the user's PC (must be secured), or some kind of key fob. The first is unworkable, the second is very impractical (currently), and the latter was the basis of my strategy. Sure, you're plugging two things in, one from your keychain and one from your briefcase, but how hard is that? Just never lose the key thing and you can lose anything that depends on it without worry. Users are also already trained to understand the importance of keys and to protect them. This is why the NSA's "Crypto Ignition Key" device looks like a key: it builds on users' ingrained habits. Password habits, laptop habits, web surfing habits and the like all suck in comparison. The only worry is lost or stolen keys. I'd try to reduce the risk with carefully conceived revocation, erasure and recovery mechanisms, but prevention is the best strategy with tiny devices & the key fob thing helps there. Btw: it doesn't necessarily have to be USB, but I'm not experienced enough in embedded hardware design to know what connectors are tiny & still cost-effective in quantities of tens of thousands, so I just said that by default.
@ Clive on deniable, secure devices
Ah, the sleeping giant awakes. Trying to steal my thunder, old man? :P I'll admit your trackball/numberpad idea is pretty nice & suited for more covert activities than you mentioned. Since I've already talked about the deniability issue in general, I'll focus on the product level here. The main problem with your plan is that your deniable device must not be deniable to succeed in the marketplace. Let me explain for the sake of the others. Products secure enough to mitigate all the risks I mention above usually cost a lot of money to develop, test & independently evaluate. They are also relatively costly to produce due to the extra muscle or functions needed for crypto, zeroization, etc. Any company that develops something like that will market it as aggressively as possible to recover their investment. Green Hills' overmarketing of INTEGRITY & General Dynamics remodeling "Sectera" government products into "TalkSecure" commercial ones are good examples.
The point? If a company invests in and markets your device, all attackers who read Slashdot, Engadget & Schneier's blog will know about & target it in weeks. A percentage of the rest will follow. Unless they design, test & build it too cheaply to be secure, it won't be deniable. That's my theory, & the field of spy gear & the success in compromising "encrypted" COTS thumb drives seem to support it. Tremendous numbers of regular and professional attackers read up on things. They know what to look for. So, even if deniability doesn't work, the security should be good enough to work. Unfortunately, designs that are secure and affordable can't be deniable.
@ Nick P,
"Ah, the sleeping giant awakes. Trying to steal my thunder old man? :P"
I do wish people wouldn't hammer on about my "god like" apperance, it's a bit of a Thor point 8)
(Just like the end of my thumb right now. I was busy doing the cooking and dicing an onion when, yeouch, I sliced open a little fingernail-sized flap, and it would not stop bleeding. The result: supper an hour late and my thumb too big to type with at the usual rate 8( Mind you, I'm excused KP / dish wash for a few days 8)
"The main problem about your plan is that your deniable device must not be deniable to succeed in the marketplace."
I was not using deniable for its "function", but deniable as in "not in use" or "insufficient privileges" (i.e. not admin level etc.).
We see this with SD cards: they are so common they don't attract attention as a "crypto" device, even though they are one in most cases. They are just seen, like MMC cards, as "mass storage". Likewise with phone SIM cards: "they make the phone work", not "crypto engine".
Often the easiest way to hide something is in plain sight, and the easiest way to do that is "familiarity breeds contempt". You don't hide the capability; you just make it a "minor feature".
The whole point of making the flash memory a removable memory card is the same as the ignition key (fob): you have one more stage of separation, so it is just like an IME from that point of view.
However, the side effect is that unless you have a memory card WITH a VALID container file, you don't have an IME. What you have is a useful keypad / vanilla memory card reader / USB On-The-Go / USB hub device.
Likewise, the binding of the parts together, so that if any one part is missing none of the others can be used to build the AES key, renders it just a useful device, not an IME.
"Products secure enough to mitigate all risks I mention above usually cost a lot of money to develop, test & independently evaluate."
If the design is done correctly, the expensive part is the "independently evaluate", and there are ways to reduce that cost as well (use a standard EAL4+ crypto card, or SIM card, as the crypto engine).
"They are also relatively costly to produce due to extra muscle or functions needed for crypto, zeroization, etc."
The big up-front cost is generally in "tooling", not component cost.
Likewise in many cases the cost of placing an item like a resistor or capacitor onto a surface mount PCB is more expensive than the component it's self.
As for "heavy lift" or "silicon grunt" the price of CPU's with inbuilt crypto is down to about one USD difference if you shop around (have a look at SIM chips for instance).
As for "zeroization" we've had a chat about this in the past and it is a design trade off. In some respects the price is a diode electrolytic cap four resistors and a bit of interupt driven software.
Even the NSA accept that AES encrypted data is secure to "Top Secret : Codeword" without the AES key (or so they tell us with their IME blurb ;)
The real expensive part is "anti snooping" that is TEMPEST and Tamper Evident design. For a comercial design the later can be achived by making it throw away epoxy filled, the former is currently/probably not of relevance outside of certain circles.
"Any company that develops something like that will market it as aggressively as possible to recover their investment."
Yes and no it depends on many things primary of which is the "sweet spot" on sales volume -v- worth to the customer. Part of the latter is "utillity".
If I put five devices in one my production costs are only marginaly larger than that of producing the most expensive item on it's own. However the utility to the customer is that they get five usefull devices in one for the price of say three seperate items. Thus you have the potential to make a significantly larger proffit.
We see this in the mobile phone market where a modern phone is,
Phone + media player + games console + internet browser + PDA + camera + video conferancing +... + conveniance.
All for the price of a basic phone + basic digital camera.
Have a think about what a memory card reading key pad, LCD and "USB on the go" hub device can do... irespective of being a mid range IME. And never never underate the utility of "conveniance" to a "mobile worker" cosumer.
"Unless they design, test & build it too cheap to be secure, it won't be deniable."
The point is not to deny that the unit has strong crypto features, it is simply to "hide in plain sight" by making that asspect just a small part of the devices utillity.
What is unknown to most people is that a simple phone with USB accessable Mass Storage Class ability can with only a relativly small software upgrade use the crypto functions on the SIM... It won't be a quick IME but it's all there apart from a few lines of code. And the market price on some of these phones is sub 150USD...
And the reason the phone manufactures don't do it as standard well it's the presure of phone service suppliers.
The phone service suppliers have an identity crisis occuring, competition and economic climate are forcing them down the same path as ISP's in the UK (and probably other places). That is they are finding themselves clasified as "bulk service providers" which is a very very cost sensitive and thus almost proffitless position to be in when your buyers are "fickle home consumers" who can drop you with the blink of an eye.
The iPhone store model is what the phone service providers are all desperate salivating for. Where they (not Apple) take a 30% TAX on "must have apps" designed by others. It is as others have noted "a nice little earner" or "money for old rope".
Google amongst others have recognised a "barrel" they can strap not just the phone service providers over but other third parties, at the same time as improving their market position against the third parties (Microsoft for instance).
So keep your eye on Android et al apps they might end up being a nasty shock to many "security product" suppliers quite soon ;)
Getting back to your point's
"That's my theory & the field of spy gear & success in compromising "encrypted" COTS thumb drives seem to support it."
Is only true within it's limited viewpoint. Exploiting utility and conveniance changes the viewpoint dramaticaly and makes your point false in that wider context.
"Tremendous numbers of regular or professional attackers read up on things. They know what to look for. So, even if deniability doesn't work, the security should be good enough to work."
The "weasle words" are "should be good enough to work". Specificaly "to work" is the viewpoint of the customer not the supplier. It is why we are seeing ludicrously insecure items sold as meeting Sarbines Oxley.
As pointed out by BF Skinner, under some view points "two passwords" is "two factor authentication". And the
security auditor just nodds it through with a tick in the checkbox (even if the two passwords are "1234" and "abcd" as a UK building society used on it's cash machines for years...)
The advantage of adding more "utility and conveniance" is although the consumer price might be double or tripple the profit might be more than ten times, thus you can have a broad range of products which are effectivly all the same from the internal design point of view.
This has a strange "inventory" effect as seen in the car industry, lock industry and in semi-conductor suppliers such as Intel and AMD.
It encorages a single high end base design for the internal mechanics or electronics put in many products across a very wide value range. The only thing that realy changes is the "packaging and marketing focus" and the cost to the consumer.
Which generaly means you get the base design designed for high end functionality and the cost amortised across the whole range. And thus considerably more highend features in lower spec lower cost devices (it's why overclocking works so well).
I'll respond to that post later after work but was quickly posting regarding one statement...
"busy cooking and dicing an onion"
Does this mean they finally let you out of the hospital? If so, then congratulations on your escape! ;)
"Does this mean they finally let you out of the hospital? If so, then congratulations on your escape! ;)"
I don't know exactly the story, but I hope too!
@ Nick P,
Yes they have let me out on furlow because of my bodies good behaviour, but in reality surgery was cancelled due to equipment failure...
So I've been assigned (not so) light domestic duties by my Ex as apparently It'll stop me being a "usless invalid" (I remember the stress being on the former not the latter). So I have the enforced pleasure of my son's company, and he being 8 and some what hyper (even when he's asleep) is ensuring that the hospital requirment of "rest and light excercise" is being sorely tested ;)
So this morning I'm venturing into the loft to get out the prototype robot's I built some years ago and teach him some practical engineering skills.
However the after school club is attempting to teach him another cooking skill this afternoon. Apparently fruit kababs whatever they might be (sounds like a vegi assult on a manly institution to me ;)
@ Gianluca Ghettini,
"I don't know exactly the story, but I hope too!"
I don't either nor do the medical proffession but we like you hope too as well and the sooner the better for all concerned 8)
Put simply I've been somewhat hard on my body over the years including falling 300ft down a Scotish mountin side. Various physical activities like Rugby mountin climbing fell walking/running and bouncing of the front and sides of cars and lorries whilst cycling and other physicaly strenuous activities whilst wearing the green have aparently caught up with me. So I spend a good deal of time in pain of one form or another which stops me sleeping (I've had to stop tacking the pain killers for medical reasons).
However the pain has hiden some underlying issues that have been slowly sneaking up on me as well and it is these that are currently putting me in hospital one week in four.
But on top of that are the issues from a botched operation or three and the results of having had my head flying kicked into a metal pole when on my way to work one day giving a full fracture of the lower jaw on the point. Which the maxiofacial surgeon chearfully informed me was a first for him (apparently it's normaly only seen on the rather abruptly deceased in cars etc).
The trouble is that they are minor problems by themselves thay conspire to fight each other one way or another. For instance PE's and DVT's (blood clots in lungs and extrematies which requires ingestion of rat poisson in substantial quantities) against intestinal bleeding which require the opposit treatment. Between these two alone I've been in hospital after three episodes of PE/DVT and four episodes of blood loss requiring blood transfusions or fluid recussitation since August. Then there is the three or four atypical bacterial attacks around a surgery site that mean a week in hospital for exotic IV anti-B's. Oh and of course these medications react adversly with the rat poison and other medications I'm on as well as my problematical liver (apparently it's enzime output says I'm a serious alcoholic but it's physical apperance and the fact I hardly drink says otherwise, so they think it may be an effect caused by the cocktail of drugs I'm on, hence no more pain killers and sleeping tablets etc)
So the medical proffession are trying to find a sweet spot that will keep me out of their clutches, me I'm thinking of getting a "time share" on a hospital bed ;)
Anyway as various people keep telling me "while there's life in the old dog..." there's the oportunity for the medical proffession to play...
I'd be rather interesting to know, if this thing can't be defeated by disconnecting one pin from the circuit, like we have seen already with some other supersecure flash drive which was, I think, also reported here long ago.
"The White paper suggests it is just an Enable "lock-out" protection, probably implemented on a cheap flash based microcontroller like a PIC or 8051. "
The one that posted the white paper confused the second version with the first version, which was indeed exactly what you describe (a Dutch researcher was able to bypass it just by hooking a resistor).
Jarda: Yea, that is the entire point of the second version of the Padlock, to fix this flaw that existed in the first version by actually using hardware encryption.
Corsair claims to generate a unique-per-device deterministic random number (presumably and hopefully securely stored within the control chip and not the flash) and also generates a non-deterministic random number when the PIN number is created (a session key), to pad out the full 256-bits required for the key (or XOR with the PIN, I can't be sure yet).
So I wonder if this actually could be much more secure than it seems?
If the keys are securely stored within the control chip and can't be accessed without the PIN or opening/microscope etc and no data goes to flash unencrypted, then this seems secure to me (assuming no mistakes with implementation). No?
See the reply from Corsair at the bottom of this thread:
Schneier.com is a personal website. Opinions expressed are not necessarily those of BT.