Schneier on Security
A blog covering security and security technology.
July 18, 2008
TrueCrypt's Deniable File System
Together with Tadayoshi Kohno, Steve Gribble, and three of their students at the University of Washington, I have a new paper that breaks the deniable encryption feature of TrueCrypt version 5.1a. Basically, modern operating systems leak information like mad, making deniability a very difficult requirement to satisfy.
ABSTRACT: We examine the security requirements for creating a Deniable File System (DFS), and the efficacy with which the TrueCrypt disk-encryption software meets those requirements. We find that the Windows Vista operating system itself, Microsoft Word, and Google Desktop all compromise the deniability of a TrueCrypt DFS. While staged in the context of TrueCrypt, our research highlights several fundamental challenges to the creation and use of any DFS: even when the file system may be deniable in the pure, mathematical sense, we find that the environment surrounding that file system can undermine its deniability, as well as its contents. Finally, we suggest approaches for overcoming these challenges on modern operating systems like Windows.
The students did most of the actual work. I helped with the basic ideas, and contributed the threat model. Deniability is a very hard feature to achieve.
There are several threat models against which a DFS could potentially be secure:
- One-Time Access. The attacker has a single snapshot of the disk image. An example would be when the secret police seize Alice’s computer.
- Intermittent Access. The attacker has several snapshots of the disk image, taken at different times. An example would be border guards who make a copy of Alice’s hard drive every time she enters or leaves the country.
- Regular Access. The attacker has many snapshots of the disk image, taken in short intervals. An example would be if the secret police break into Alice’s apartment every day when she is away, and make a copy of the disk each time.
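Under the intermittent- and regular-access models, the attacker's basic tool is diffing snapshots: any churn in supposedly free space between images is suspicious. A minimal sketch of that comparison (the helper name and sector size are illustrative, not from the paper):

```python
def changed_sectors(img_a: bytes, img_b: bytes, sector_size: int = 512):
    """Return indices of sectors that differ between two disk snapshots.

    Any write to a hidden volume between snapshots shows up here as
    unexplained churn in "free" space -- the core of the intermittent-
    access attack.
    """
    assert len(img_a) == len(img_b), "snapshots must cover the same disk"
    diffs = []
    for i in range(0, len(img_a), sector_size):
        if img_a[i:i + sector_size] != img_b[i:i + sector_size]:
            diffs.append(i // sector_size)
    return diffs
```

The point is that the attacker needs no keys at all: mere change in regions the user claims are empty is already evidence.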
Since we wrote our paper, TrueCrypt released version 6.0 of its software, which claims to have addressed many of the issues we've uncovered. In the paper, we said:
We analyzed the most current version of TrueCrypt available at the writing of the paper, version 5.1a. We shared a draft of our paper with the TrueCrypt development team in May 2008. TrueCrypt version 6.0 was released in July 2008. We have not analyzed version 6.0, but observe that TrueCrypt v6.0 does take new steps to improve TrueCrypt’s deniability properties (e.g., via the creation of deniable operating systems, which we also recommend in Section 5). We suggest that the breadth of our results for TrueCrypt v5.1a highlight the challenges to creating deniable file systems. Given these potential challenges, we encourage the users not to blindly trust the deniability of such systems. Rather, we encourage further research evaluating the deniability of such systems, as well as research on new yet light-weight methods for improving deniability.
So we cannot break the deniability feature in TrueCrypt 6.0. But, honestly, I wouldn't trust it.
There have been two news articles (and a Slashdot thread) about the paper.
One talks about a generalization to encrypted partitions. If you don't encrypt the entire drive, there is the possibility -- and it seems very probable -- that information about the encrypted partition will leak onto the unencrypted rest of the drive. Whole disk encryption is the smartest option.
Our paper will be presented at the 3rd USENIX Workshop on Hot Topics in Security (HotSec '08). I've written about deniability before.
Posted on July 18, 2008 at 6:56 AM
I never considered the plausible deniability claim by TrueCrypt to be accurate. I don't think it is genuine plausible deniability.
What is needed is a system in which the protected information, encrypted or not, cannot be proven by an adversary to even exist. As soon as an adversary encounters what is an OTS product, a password challenge, or what is obviously encrypted data, it's all over. It doesn't matter if the data is encrypted or how, or if the data is hidden on unformatted sectors. And any kind of password challenge is a complete giveaway. True plausible deniability is not easy.
The problem of "deniability" is not just with Microsoft's OSes but with virtually all multitasking OSes.
It has to do with the trade-off between efficiency and security.
As a rule of thumb (one I have yet to see broken), the more efficient a system is, the less secure it is.
It is a more specific case of limited resources versus security. That is, OS designers assume that resources are going to be limited in one way or another -- for example, that system RAM will be insufficient and also too slow -- so data is cached both in slower memory (the hard disk) to make up for the lack of quantity and in much faster memory in the CPU.
Over and above the computer operating system are the window manager and applications. The window manager is designed in a similar way to the underlying operating system, because applications are normally designed on the assumption of no resource limits, and the window manager and underlying OS must support this.
It is going to be difficult if not impossible to reverse this trend in design, except for very specialised security systems.
So the solution needs to be sought in another way.
That is, where you have an operating system that assumes there is no hard disk to cache to and that the mutable storage is fragile (i.e. has a very limited number of writes), such as Flash memory from a few years ago.
As an example, an OS for a diskless client, or one designed to run from ROM, etc.
So "Knoppix" on a USB drive etc. is among the easiest to set up.
Also, as several people will know, NT-based systems can be set up to run from ROM as well, but it is not an everyday task to do.
Obviously systems such as these, not having hard disks to write to, cannot leave traces on one.
The price is having to work in a restricted system where you might only be able to open one or two applications. Personally I do not think this is a very high price to pay for the level of increased security you gain.
I often wondered why border guards don't ask for a defragmentation of TrueCrypt volumes that may contain a hidden one. Then you can only plausibly deny if you've got a good backup.
Bruce, what about having a hidden encrypted partition on a thumb drive? Would the OS leak that information, or would it be safer because it is on a removable drive?
"I often wondered why border guards don't ask for a defragmentation of TrueCrypt volumes that may contain a hidden one"
Defragging tools generally have very predictable behaviour; it would not be too difficult to design a hidden drive that hides away at, say, the other end of the disk. As long as the user never puts enough data onto the hard drive that the hidden and normal data storage collide, defragging would not touch the hidden data: it has nothing to shift there (it does not know about the hidden data) and no reason to move data up into the hidden area.
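The allocation discipline described above can be sketched as a toy allocator (entirely hypothetical, not TrueCrypt's actual scheme): the visible filesystem grows from the front of the disk, the hidden data from the back, and any collision fails loudly rather than silently corrupting the hidden area:

```python
class TwoEndedAllocator:
    """Toy block allocator: visible data fills from the front of the
    disk, hidden data from the back, so neither disturbs the other
    until the disk is genuinely full."""

    def __init__(self, total_blocks: int):
        self.front = 0             # next block for visible data
        self.back = total_blocks   # one past the next block for hidden data

    def alloc_visible(self) -> int:
        if self.front >= self.back:
            raise RuntimeError("visible data would collide with hidden area")
        block = self.front
        self.front += 1
        return block

    def alloc_hidden(self) -> int:
        if self.back - 1 < self.front:
            raise RuntimeError("hidden data would collide with visible area")
        self.back -= 1
        return self.back
```

In a real deniable design the failure branch is the hard part: the visible filesystem cannot be told *why* the disk is "full" without giving the hidden data away.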
With the price of hard drives today (you can buy a hard drive for $200 that is larger than the sum of all online storage on Earth through 1968 or so), it should be feasible to have a RAID-like array where significant quantities of information are hidden in the interaction of the bits that actually store other files. Of course, I still don't know how you'd make that deniable once it was no longer obscure :(
Homophone alert: you have the wrong kind of 'principal' in footnote 3.
@Mike, removable media may still leak the data, as the paper pointed out, which in turn affects the deniability of said data... Read section 4 and beyond on page 3 of the paper.
The issues in your paper seem to apply more to Windows operating systems, which we all know were never built to be secure (and certainly not deniable).
In a sense, with Truecrypt 6.0 coming out a week ago, your paper is already obsolete. Not only does the newer version allow for hidden operating systems (so that individual applications can no longer affect deniability) but hidden volume support has been added back into the Linux and MacOSX versions.
Also I think the Intermittent/Regular access attacks would be impossible if a hidden volume was nested inside another hidden volume (which would of course be in a Truecrypt container volume).
I would really love to see a follow-up paper, looking at Truecrypt 6.0 in particular:
* a well-setup Linux machine (of particular interest, the /tmp directory on its own partition - perhaps encrypted - and periodically shredded),
* the hidden OS mode,
* nesting of hidden volumes.
"In a sense, with Truecrypt 6.0 coming out a week ago, your paper is already obsolete."
The general points are still valid, but an attack against an older version of TrueCrypt is much less interesting.
That depends on which is more important to you: keeping the existence of the data secret, or keeping the secret data in existence....
Furthermore, there is a legal question: are the ones searching you also allowed to tamper with / break your stuff? UK authorities can force you to surrender the key to an encrypted drive -- but are they allowed to destroy the drive simply to find out how you react? I don't think so.
Less interesting to people already using the new version.
More interesting to any border guards etc. who have a bunch of full-disk images lying around out back that they can now re-analyse based on the new findings.
It would be interesting to see how Google Desktop and OpenOffice (StarOffice, M$ Office on the Mac) might leak data in the newest version.
I find it remarkable that the issues pointed out in the paper were addressed so quickly, and let's hope thoroughly. Even though the newer version addressed the issues, I still found the paper interesting (even if it was *obsolete* ;)
BEWARE anytime someone says if only you used Linux, or anything NOT Microsoft, then you wouldn't have any problems. It's nonsense. They're just repeating the same stuff they heard others say. Open Source is great, but most of the anti-MS crap is mindless university sub-culture.
Typically they will compare a flat Unix file system to a Windows system hosting various applications, then demo permissions, etc. Or they'll blab on and on about the kernel, etc. MS has definitely made mistakes, but they also went places no one else did, and they made it available to the masses. Most vulnerabilities aren't in the kernel to begin with. No matter what OS you use, as soon as you ask it to host an application its security is inherently weakened, and that goes for ANY system. Once you've told the OS to trust an application -- or any one of a thousand applications with access to I/O, memory, etc. -- you can't fairly compare a bare Unix kernel with a full Windows system.
BTW, I am not interested in PD because I anticipate a confrontation with secret police or any other sort of authority, nor because I store and carry illicit material. It matters because the same properties are valuable in deterring an attacker -- if they don't even know it exists, they can't hack it.
Even if a "hidden" file system is not obvious, if an OTS product such as TrueCrypt responds differently to one machine vs another, it is no longer hidden. And in many countries, what good will encryption do? They will just throw you in jail.
I thought about this a bit on my own, and the solution that I think is practical to a certain extent is to use "portable" apps when you need to access your encrypted data.
Alice has her "deniable" volume, and she also carries a USB key, also disk-encrypted with a deniable volume, using WinPenPack or Portable Apps. Each volume has a separate set of keys. Alice practices a discipline such that she only accesses her encrypted data from the WinPenPack.
This of course will not alleviate the issues due to indexing and other OS operations on the deniable volume. However, a potential solution to this as well would be to carry an entire encrypted virtual machine. If the deniable volume only held a VM disk, such a disk does not readily expose data to the operating system. To make the entire thing truly portable, though, one would need the VM component to be on the second, removable media device while the VM data resides on the computing device in a deniable volume.
I'm unsure if there is a practical combination of such tech available today - possibly if one had a bootable ISO of a Knoppix distribution on the removable media, with Xen or something similar compiled into the kernel.
I could envision such a package and method as being not *too* difficult to deploy and develop as a discipline, although it has significant practical disadvantages over simply accessing your encrypted volume from your primary OS on your primary device.
The thing I found annoying about the paper was that it focused quite a bit on leaks in third-party programs, not in TrueCrypt itself. That should by default be a huge "DUH" -- anyone who is paranoid about security should know that MS and indexing programs will leak like a sieve. That is not a fault of TrueCrypt (or any other DFS). You want secure, you use a bootable CD/DVD. No chance of a leak.
I did find the snapshot a very interesting attack vector. There isn't a really good defense against that except some sort of random swapping of sectors by TrueCrypt -- but you're talking some overhead to create a system that could handle sectors moving outside of the mounted volume.
what about a bootloader that would
1) after entering the correct password, boot your OS
2) with a blank password boots a 'clean' OS for TSA etc
3) with the wrong one boots into self-destruct and starts erasing the disk (but keeps displaying nice XP screens to keep the intruder hoping for success and give it more time to erase)
I take my disk out of my Dell (two screws and the tray slides out easily) and let my wife take it, so if anyone wants to see my laptop it's just a useless piece of hardware!
Surely the existence of an entry in a Most Recently Used list is proof that the file once existed. I had a tidy-up of my desktop today, so my MRU contains a lot of entries that point nowhere. Those files have now been deleted to DoD standards, so they truly do not exist. Is my denial of their existence plausible to you?
@neill The new version does most of that, two different passwords boot two different OS's. The self destruct idea is interesting, but if TC 6 is a significant improvement, probably not necessary, especially if you use the "decoy" os as often or at least on the same days as you use the hidden os.
I think you may have misunderstood how TrueCrypt's deniable filesystem works. The idea is that there are two separate filesystems. Which one you get depends on which password you enter. So the existence of a password prompt proves nothing. You just say that it's for your encrypted drive which, of course, as a smart business traveler you keep for all of your important business documents. Then you put in your decoy password, which mounts your decoy filesystem with plausible business documents, and no real information.
Of course the fact that TrueCrypt is known to be able to have a decoy partition may then make your adversary suspicious, and he could go off looking for the hidden partition (which, I am told, is not as hidden as claimed if you look at things like HD sector access records) and use rubber hose cryptanalysis on you until you give up the real password.
I agree that true deniability is really tough, but TrueCrypt isn't quite as obvious as you make it out to be.
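The decoy mechanism described above can be sketched conceptually. This is not TrueCrypt's real on-disk format -- the header layout, KDF parameters, and function names here are all illustrative. The idea is that the software tries the entered password against a header for each candidate volume and mounts whichever one it opens, so the prompt itself reveals nothing about which volumes exist:

```python
import hashlib
import hmac
import os

KDF_ITERS = 10_000  # illustrative only; real tools use far more iterations

def make_header(password: bytes) -> bytes:
    """A toy volume header: a random salt plus a MAC tag that only the
    right password can reproduce."""
    salt = os.urandom(16)
    key = hashlib.pbkdf2_hmac("sha512", password, salt, KDF_ITERS, dklen=32)
    tag = hmac.new(key, b"VOLUME-HEADER", "sha256").digest()
    return salt + tag

def header_opens(header: bytes, password: bytes) -> bool:
    salt, tag = header[:16], header[16:]
    key = hashlib.pbkdf2_hmac("sha512", password, salt, KDF_ITERS, dklen=32)
    want = hmac.new(key, b"VOLUME-HEADER", "sha256").digest()
    return hmac.compare_digest(tag, want)

def mount(disk_headers: dict, password: bytes) -> str:
    """Try the password against every header on disk; the password alone
    selects the volume, and a wrong password is indistinguishable from
    the absence of further volumes."""
    for name, header in disk_headers.items():
        if header_opens(header, password):
            return name
    return "no volume"
```

This is why the password prompt "proves nothing" in the comment's sense: the same prompt, with the decoy password, yields a perfectly ordinary volume.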
I'm curious - how does TrueCrypt prevent the decoy filesystem from overwriting data on the hidden one?
More specifically, when you have the decoy mounted and write to it, if the filesystem knows what blocks are allocated to the hidden system, then anyone given access to the decoy could prove that there is additional, hidden data.
If it doesn't know what blocks are allocated for the hidden filesystem, then writing to the decoy could potentially overwrite and corrupt something hidden, and not having any recent modifications would be a good reason to suspect that you're looking at the decoy data.
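For what it's worth, TrueCrypt's documented answer is closer to the second branch: the decoy (outer) filesystem does not know about the hidden volume, so to write to the outer volume safely you mount it in a hidden-volume-protection mode, supplying both passwords, and the driver then refuses writes that would land in the hidden region. A minimal sketch of such a write guard (the class and API are mine, not TrueCrypt's):

```python
class ProtectedOuterVolume:
    """Write guard for an outer volume mounted with hidden-volume
    protection: any write overlapping the hidden region is rejected
    before it can damage the hidden data."""

    def __init__(self, size: int, hidden_start=None, hidden_end=None):
        self.data = bytearray(size)
        # (None, None) models a normal mount with no protection
        self.hidden = (hidden_start, hidden_end)

    def write(self, offset: int, payload: bytes) -> None:
        start, end = self.hidden
        if start is not None and offset < end and offset + len(payload) > start:
            raise IOError("write would damage hidden volume")
        self.data[offset:offset + len(payload)] = payload
```

The trade-off the comment identifies is real: this protected mount requires knowing both passwords, so it is only safe for the volume's owner in private; handing an adversary a protected mount would itself reveal the hidden region.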
I used "defective sectors" in the past to hide data (marked as defective by me), not a whole partition (a few GB of continuous blocks is too easy to spot) -- those sectors usually won't be bothered by any OS.
But then, anything your CPU can understand could be disassembled.
I think the only really secure option is a hardware solution that remaps & encrypts sectors depending on the password, maybe some nice disk-controller-firmware hack.
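The remap-by-password idea can be modeled in a few lines (a toy sketch, not real controller firmware): derive a key from the password and use it to drive a deterministic shuffle of the logical-to-physical sector mapping, so without the password the layout looks like noise:

```python
import hashlib
import hmac

def sector_map(password: bytes, n_sectors: int) -> list:
    """Deterministic, password-keyed permutation of sector indices:
    a Fisher-Yates shuffle whose randomness comes from an HMAC of the
    position under a key derived from the password."""
    key = hashlib.sha256(password).digest()
    perm = list(range(n_sectors))
    for i in range(n_sectors - 1, 0, -1):
        # derive a pseudorandom index in [0, i] from the key and position
        digest = hmac.new(key, i.to_bytes(8, "big"), "sha256").digest()
        j = int.from_bytes(digest, "big") % (i + 1)
        perm[i], perm[j] = perm[j], perm[i]
    return perm
```

Note this only scrambles *where* sectors live; the contents would still need encryption on top, exactly as the comment says.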
Following up on my earlier comment:
Isn't the real issue here - even aside from the OS - badly-behaved applications?
If you are using your primary device's OS and apps to access your deniable filesystem, then any MRU or other pointers left by the apps you used to access it serve to defeat your security, by pointing to the (now-empty) location left behind when Alice dismounted her deniable filesystem.
Windows is a poor environment in which to enforce that security, because application behavior and security is highly fragmented and not subject to a central API (MacOS comes closer to this -- or even the iPhone) that would govern how those MRUs and file data can and would be cleaned up after access.
Again, it would seem that the real solution here to the issue would be to either use truly portable apps (assuming you can somehow sandbox your encrypted volume from the regular OS), or to use an entirely virtual client system to access your encrypted data. How else are you going to enforce the needed trust?
"BEWARE anytime someone says if only you used Linux, or anything NOT Microsoft, then you wouldn't have any problems. It's nonsense. They're just repeating the same stuff they heard others say. Open Source is great, but most of the anti-MS crap is mindless university sub-culture."
Sure, sometimes. However, in this instance the truth is that the vulnerabilities in Bruce's paper apply to Windows and *cannot* necessarily be mapped onto Linux. The strict file permissions and mounting of separate partitions in Linux give it a clear edge, security-wise. Reiterating some of the risks in the paper:
1. virtual memory (swap file)
- in Linux, just turn the thing off (swapoff -a) before you decrypt the drive. Also on Linux, as the swap file is actually a partition it won't move around the disk, so you can securely shred it periodically.
2. programs writing files all-over the place
- file permissions mitigate this issue. Let's say my home directory /home/USER is an encrypted hidden partition. I might be working on some documents in there, and I know there is a chance autosave files will go elsewhere on the filesystem - but where? Well the default permissions only allow a user to write to their own home directory (which in this case is encrypted) or to /tmp. You can encrypt /tmp using Truecrypt, or you can temporarily disable it by mounting it in RAM so that on reboot the data is lost.
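As a small aid for the first point, here is a hedged helper (the function name is mine) that parses the standard /proc/swaps format so a script can verify that `swapoff -a` actually took effect before mounting anything sensitive:

```python
def active_swap_devices(proc_swaps_text: str) -> list:
    """Parse the contents of /proc/swaps (a header line, then one
    device per line) and return the active swap devices.  An empty
    list means swap really is off."""
    lines = proc_swaps_text.strip().splitlines()
    return [line.split()[0] for line in lines[1:] if line.split()]

# In practice: active_swap_devices(open("/proc/swaps").read())
```

Checking rather than assuming matters here, because a forgotten swap partition silently re-enabled at boot defeats the whole precaution.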
Windows does not provide any of the above abilities, it does not have the same flexibility because it has a different target audience. As a result of this it is more difficult to keep it under control.
> The students did most of the actual work.
Now I know why Bruce went corporate over academia. C'mon, Bruce, don't you know that this sort of admission is verboten! ;-)
"Isn't the real issue here - even aside from the OS - badly-behaved applications?
Again, it would seem that the real solution here to the issue would be to either use truly portable apps (assuming you can somehow sandbox your encrypted volume from the regular OS), or to use an entirely virtual client system to access your encrypted data. How else are you going to enforce the needed trust?"
No - the problem is the OS. All the individual applications in the world cannot be expected to be "polite" on the assumption that one of their users might be trying to achieve deniable encryption. Many application developers do not understand procedural security. Take for example a family-tree-making application - is the author really going to care about a temporary file giving away the mere existence of a directory? Of course not.
The behaviour of applications has to be enforced by the OS and by the filesystems. Windows should probably have a secure mode which, when activated, severely restricts where programs can write files. This would restrict what programs can run to only the "well behaved ones" you mention, but it is the only way (without hidden OS) to keep deniability.
> 1. virtual memory (swap file) - in Linux, just turn the thing off
> (swapoff -a) before you decrypt the drive.
This is a workaround, not a solution. And really, if your encryption tools require the user to do something to the operating system before you use them, they don't work. (P.S. -> you can easily turn off the paging file in Windows)
> 2. programs writing files all-over the place - file permissions
> mitigate this issue.
No, they don't. Trying to solve this problem with file permissions just means you're going to wind up having things fail because they can't write where the programs expect to be able to write. This isn't a "windows" problem, it's a "people who write software for windows" problem. Well-behaved Windows applications write everything to the user's profile, which has been the default location for many years now.
It's also not a relevant differentiation between UNIX and Windows, because you could take this "deny write permission" tack on both OSes. (In fact, one of the rare advantages of Windows over UNIX is that NTFS ACLs have more group- and user-level security settings; they're just rarely used properly.)
I thought the paper was very clear and readable. My main reaction -- these vectors would be better handled through virtual machines than through custom boot loaders -- seems to have been covered by stygmata above.
As far as merlin's comments on Linux-based leakage vectors are concerned, I would add that many Linux distributions package a version of mlocate/updatedb, which indexes -- as root -- all files on all available hard drives. If the nightly cron-based update should happen to run while the hidden volume is mounted, you're busted. So, if you're hiding a volume under Linux, take updatedb out of cron, or even better, uninstall it altogether. 'locate' won't work for you anymore, but then again it won't work for the secret police either.
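On mlocate-based systems there is also a middle ground between cron surgery and uninstalling: prune the mount point via PRUNEPATHS in /etc/updatedb.conf. A sketch of an editing helper (the helper function is mine; PRUNEPATHS itself is mlocate's own setting):

```python
def add_prunepath(conf_text: str, mount_point: str) -> str:
    """Add mount_point to the PRUNEPATHS line of an updatedb.conf-style
    file so updatedb never descends into the (mounted) hidden volume."""
    out = []
    found = False
    for line in conf_text.splitlines():
        if line.startswith("PRUNEPATHS="):
            found = True
            paths = line.split("=", 1)[1].strip().strip('"')
            if mount_point not in paths.split():
                paths = (paths + " " + mount_point).strip()
            line = 'PRUNEPATHS="%s"' % paths
        out.append(line)
    if not found:
        out.append('PRUNEPATHS="%s"' % mount_point)
    return "\n".join(out)
```

There is an obvious catch, though: the PRUNEPATHS entry itself then names the mount point, which leaks its existence. Pruning a generic parent directory, or disabling the job entirely as suggested above, avoids that.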
"That depends on which is more important to you: keeping the existence of the data secret, or keeping the secret data in existence...."
What if the DFS had two passwords? The first opens the DFS normally. The second, an "extreme duress" password, appears to open the DFS normally, but activates a data destruction routine that deletes or overwrites a selected subset of the data in the DFS.
The user would maintain obviously sensitive data in the normal DFS area, but also maintain critical high sensitivity data in this emergency destruct area.
The user would have given up the first DFS password only under duress, anyway. That means the user is already up a creek without a paddle relative to the attackers / authorities knowing there was an intentional attempt to hide data.
If the attackers do not realize the double-wrapped data was deleted as they opened the DFS, they are happy because they found the cloaked data. The user is found out, and will be prose- or perse-cuted as appropriate by the attackers for having hidden the data. However, the user is happy that the truly sensitive data has been destroyed and not fallen into the hands of the attackers.
If the attackers do become aware of the existence of the destroyed data, then they may be even more upset at the user, BUT...
The user was already in trouble with them, anyway, for trying to sneak data past them, and, that data still is denied to the attackers because it got destroyed during the process of opening the DFS. The user will face the same hell as in the first case, and still has prevented the attackers from seeing the highly sensitive data.
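The duress scheme above can be sketched as a toy dispatcher (hypothetical; a real implementation would need secure overwriting rather than mere deletion, and none of this addresses the legal exposure discussed below):

```python
def open_dfs(password: str, store: dict) -> dict:
    """Toy DFS front end with a normal and a duress password.

    Under duress, the critical subset is destroyed *before* the view is
    returned, so the attacker sees what looks like a normally opened
    volume containing only the lower-sensitivity data.
    """
    if password == store["normal_pw"]:
        return dict(store["files"])
    if password == store["duress_pw"]:
        for name in store["critical"]:
            store["files"].pop(name, None)  # stand-in for secure overwrite
        return dict(store["files"])
    raise PermissionError("bad password")
```

Note that the destruction must be fast and silent to work at all; a volume that visibly "opens slowly" under the duress password gives the game away.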
@ Merlin, Patrick Cahalan,
The problem is below the application level for good reason (as I noted above).
It could be argued in some cases that some of the problems are even below the OS level, in that some Device Drivers that cache data for multi-phase commit etc will also cause problems.
However, the main problem for uncontrolled and unpredictable information leakage at the OS level is data buffers in the kernel...
That is, when data is written to a hard disk it may not be a full disk block in size; likewise it may not be a full OS buffer in size either. Some OSes clean out buffers after a write to disk to ensure that sensitive data does not get left behind to be written out somewhere else on disk, but most don't.
As I noted above efficiency and security are usually in conflict, efficiency says don't zero the buffer security says zero it. In this "performance driven" marketplace which do you think is going to get priority in a general purpose multi tasking OS?
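The buffer-scrubbing trade-off Clive describes can be seen in miniature (a Python stand-in for what a kernel block layer would do; the function name is mine):

```python
def flush_and_scrub(buf: bytearray, sink: list) -> None:
    """Flush a staging buffer to 'disk' (here, a list standing in for
    the device), then zero the buffer so a later partial write reusing
    the same buffer cannot leak stale sensitive bytes elsewhere on disk."""
    sink.append(bytes(buf))        # the write itself
    for i in range(len(buf)):      # the security cost: an extra full pass
        buf[i] = 0
```

The extra zeroing pass is exactly the efficiency cost a performance-driven OS is tempted to skip, which is Clive's point.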
> This is a workaround, not a solution. And really, if your encryption
> tools require the user to do something to the operating system before
> you use them, they don't work. (P.S. -> you can easily turn off the
> paging file in Windows)
No, it is not a workaround. It's a procedural security measure. Passwords, for example, require that the user not write them on Post-it notes and stick them on the desk.
You can turn swapping off on Windows, but it requires a restart. On Linux you can swapon/swapoff whenever you like. Also, the usage of virtual memory is completely different on Linux and Windows... even if you have 2GB of RAM, Windows will still use VM, whereas Linux only uses it once memory gets 70-80% full.
> Trying to solve this problem with file permissions just means you're
> going to wind up having things fail because they can't write where the
> programs expect to be able to write.
Err, no. On Windows, yes - but as I've already said, Windows (or rather the backwards compatibility of Windows) is the problem. There are many other OSes out there which effectively and securely implement proper file permissions and DON'T have programs crashing as a result.
> This isn't a "windows" problem, it's a "people who write software for
> windows" problem. Well behaved Windows applications write everything
> to the user's profile, which has been the default location for many
> years now.
But there is no *enforcement* of writing to the user's profile. Also, the user would have to edit a registry setting to put the Documents and Settings folder on an encrypted partition.
> It's also not a relevant differentiation
> between UNIX and Windows because you
> could take this "deny write permission" tack
> on both OSes.
Yeah - my point is that Linux (and BSD etc.) already *do* take this "deny write permission" tack. It is mature and it works - we don't have programs crashing because they can't write to /xyz... because they never could write to /xyz.
> (In fact, one of the rare advantages of Windows over UNIX is that
> NTFS ACLs have more group and user level security settings; they're
> just rarely used properly.)
ACLs are not really a security advantage - once networks get large enough, they are far too complex to analyse properly. I have worked as a Server Engineer on both Windows and Linux, and I can say that general policy is to avoid ACLs if possible.
"The user will face the same hell as in the first case, and still has prevented the attackers from seeing the highly sensitive data."
I would not be too sure of that; hiding data that is not "criminal" but just, say, commercially/politically sensitive is effectively a slap on the wrist in a lot of places.
Destroying evidence is most definitely a criminal offence in just about every place I can think of. In some places it could be argued that what you destroyed was (for argument's sake) a terrorist plot... and having destroyed it, how are you going to argue/prove otherwise? As a result you could be looking at an indefinite period in an open-air cage just outside the U.S.A. as a guest of the armed forces, without ever having seen a judge or a jury...
Unless anybody can come up with a better argument, I would say: unless you know for sure that the data is going to get you a long time in jail (i.e. it's criminal), keep the data encrypted and work your way through the process until you arrive at a court/judge, then show them that you are telling the truth (i.e. it's commercially sensitive). The chances are you'll be in a lot less trouble, and more importantly your data can be kept sealed by court order.
True deniability is very difficult, if at all possible. At least TrueCrypt's developers are putting in effort towards achieving this goal... and not for profit.
Yes, perhaps due to marketing exaggeration or ignorance, some might interpret TrueCrypt as promising or guaranteeing full security.
Ahhh, the marketing... all those happy and smiling people in the ads... no matter how crappy or insignificant the product is. Have you seen anyone walking around smiling all day because they switched to the advertised brand of cereal?
If you consider functionality-to-price ratio TrueCrypt is far superior to PGP and lots others.
Personally I wouldn't trust PGP since that's buying "a cat in a bag".
I deal with enough technological dumb-asses whose attitude is "I don't care" (until shit hits the fan and then it's someone else's fault), that I really don't care what Joe-public thinks.
What you don't know can hurt you, so do your research yourself, experiment with it, try to break it and put it back together, etc. and don't let one entity tell you what to do based on their opinions and prejudices.
I follow this site off-and-on, and I've seen someone ask Bruce if he's looked at TrueCrypt. Has he? You never know, he might actually like it, I'm really curious...
yo, great... Bruce discovered 2008 TrueCrypt. Security guru de la creme ;-)
Hey guys, crypt the *bootdisk*
@neil, I like your line of thinking, but would like to see something else.
Instead of being prompted to enter a password, you boot straight into a clean, unencrypted OS that has no trace of TrueCrypt being installed under that OS.
I would like to have it so you press and hold a certain key combination -- customizable, so it's not some standard key or key combo -- and then you get the password prompt, which behaves just as TrueCrypt 6.x does now.
You would then use the unencrypted OS most of the time, the "visible" encrypted OS a fair amount of the time and the hidden OS only for that data you truly wanted to keep hidden.
This would keep any TSA person from seeing that you have "something to hide" (at least from their minimum-wage, uneducated, I-couldn't-become-a-cop, I-like-to-feel-people-up-and-I-happen-to-not-use-drugs eyes), while still keeping your private data private.
Yes, if someone did an analysis of the hard drive they'd still find there's something there, probably encrypted. With enough digging, by inspecting the MBR, they'd probably see that you are at least using TrueCrypt (or some other disk-encryption technology), and thus you could be guilty until proven innocent.
I think the fact alone that you have an encrypted disk prevents any possible "plausible deniability", especially if you are using OTS software whose feature set is known. If you have the skills to create custom disk-encryption software, then yes, you might be able to pull off "plausible deniability", but even then I think it would be difficult, and it would only be as effective as your will is strong. If you are being held by legal police you're probably fine. If you are being held by a criminal or terrorist organization, well, it'll probably be your life or your data.
Let's say you had a BIOS that asks for the password, relays the hash to the disk controller to use as a key for AES (or Blowfish, Bruce!), and remaps sectors depending on a blank/non-blank password (there's only one specific hash for a blank one). Then you wouldn't even find the MBR anymore, and an intruder would find a nice clean OS, but analysis of the disk shows "garbage" (best case: "white noise").
Personally, at this time in my life, nothing is worth losing it over some data, though (baby v0.3).
To me this is all a forest for the trees thing. The real question is: Do you want the bad guys to not get to your data or do you want them to not even suspect that there is data? If it's the latter you'd do better disguising a usb stick as a ball point pen. You wouldn't even have to encrypt the thing.
Many of the scenarios described here just scream "I'm hiding something!"
"Isn't the real issue here - even aside from the OS - badly-behaved applications? "
Is a program that keeps a record of your most recently opened applications badly-behaved or user-friendly? I think most people would vote for the latter.
"If it's the latter you'd do better disguising a usb stick as a ball point pen."
If I really, really wanted to hide a flash memory chip, I'd solder it right next to a bunch of other chip packages. It's actually somewhat expensive to tell what any particular chip does unless you read the part number and pull the datasheet.
"Many of the scenarios described here just scream 'I'm hiding something!'"
Like having Truecrypt installed in the first place... why did you use Truecrypt rather than the OS's built-in encryption (MacOSX - Vault; Windows - BitLocker; Linux - DMCrypt)? The way things are going these days, I can see the courts arguing that having Truecrypt installed is enough to assume someone has a hidden volume. In fact, in the UK you can now be forced to hand over encryption keys or face 5 years in jail (oh - sorry, unless you can prove that you don't know them)... Put all this together: if Truecrypt is on a system, they could start jailing people on the assumption that they have hidden volumes.
The only thing close to a solution would be to keep Truecrypt on a (read-only) CD.
Plausible Deniability should be impossible without off-line storage.
You just can't store X and Y in the space it takes to store only X. Otherwise you've discovered a nice new infinite compression method.
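The counting argument above is just the pigeonhole principle; a toy sketch (pure illustration, nothing TrueCrypt-specific):

```python
from itertools import product

# To store both X and Y, recoverably, in the space of X alone, you would
# need an injective map from all (X, Y) pairs into the states of X's space.
bits = 4
x_states = 2 ** bits                               # distinct contents of the space
pairs = list(product(range(x_states), repeat=2))   # every possible (X, Y) pair

# More pairs than states, so some pairs must collide into the same disk image:
print(len(pairs), ">", x_states)   # 256 > 16
```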
Two other separate issues I've been considering are:
(1) Are you hiding your data just for your own peace of mind; or
(2) You're James Bond and you'd rather the data died (with you) than be revealed.
In (1), loss is no big deal; you could use ROT13 and the purse snatcher probably wouldn't figure it out. Or Big Brother would spare no expense deciphering your, uh, porn collection. In either case all this high-powered encryption is pointless - nobody really cares.
Point (2) is what counts. What do any of us really know that we'd take a rubber-hose beating for? Heck, I'll give you my porn just by your asking for it. Corporate data? It's a foreign language to almost anyone outside the company. Drug dealer data? Give up the password and you've got a ticket to immunity.
So what's left? Military espionage? Assassination plots? Kitty porn? Terrorism?
So unless this is all an esoteric argument then someone tell me why I need a hidden OS with a DFS and an external drive disguised as a cigar box unless I'm *really* up to no good, which should trouble all of us.
"surface mount" got it right: a fistful of micro SD cards hot-glued to a motherboard would fool 98.8% of the security in the world - they don't even look at "raw" electronics. I've carried bare SATA drives through many airports with no incident - if it doesn't have a power button they don't care. So why bother with all this?
Disclosure: I do use TCrypt, in case of loss or theft, but unless we're overthrowing a gov't this all seems like overkill (even if it's very interesting reading, I love this stuff).
You couldn't pick your nose without a computer.
Actually I use needle-nosed pliers. What's your problem?!?
That reminds me of:
"StegFS - A Steganographic File System for Linux
StegFS is a Steganographic File System for Linux. Not only does it encrypt data, it also hides it such that it cannot be proved to be there."
It might be a suitable file system to use inside a TrueCrypt container.
My company, and several others I've worked for, encrypt the entire HD of their laptops... Is my company hiding something? If my company installs both decoy and hidden OSes, is it hiding something, or rather trying to protect its data? If I as an individual do the same, am I guilty by association for using the latest version of TC? No (dumbass). Just because TC6 can do something, such as host two OSes, does not mean that I did it. So it is NOT reasonable to assume that I have set up a decoy and hidden OS just because of the version of the application.
Just because I use the latest version of 7zip or winzip, and have a password protected archive, does that mean I'm hiding something, or protecting something?
I have to say I found the articles mentioned above to be very misleading. Probably not intentionally but rather because the authors did not understand what they were writing about. Take the darkreading.com article as an example:
"Researchers break 'deniable file system' steganography feature"
What exactly has been broken? When I read this sentence I thought the researchers had found a flaw in TrueCrypt's encryption/"hiding" algorithm. Something that leaves the part of the HDD where the hidden data is stored with a detectable pattern for the attacker to find. But this is not the case.
The article that originally caught my attention is even worse. It can be found at http://www.informationweek.com/blog/main/... and claims that the DFS feature has been "cracked".
If the DFS itself had been "cracked" or "broken", it would be possible to prove if a TC container or a TC partition contains a hidden volume/partition even if it has never been mounted. This is, at least to my knowledge, not possible.
It's a bit like announcing that "SSL has been broken" because a web browser cached a copy of a securely transmitted web page, or that "PGP is unsafe" because some email clients store the email bodies in plaintext after decryption.
@ rich rumble,
"does that mean I'm hiding something, or protecting something?"
It depends on the view point of the person looking at your PC...
Basically, if they want to know, then since you are "hiding" it from them, they are probably going to assume the worst until you prove otherwise; it's Catch-22...
"then someone tell me why I need a hidden OS with a DFS and an external drive disguised as a cigar box unless I'm *really* up to no good, which should trouble all of us."
It depends on what you do for a living. Someone who does the payroll etc on a laptop that they work from home on, I would very much hope they did take these precautions.
Likewise an engineer working on leading edge or market sensitive designs should also have a duty of care to the shareholders.
How about your legal representative or your priest they have a duty of care over what you have told them.
I could give you a hundred and one people / reasons why virtually everybody with a duty of confidence should use these tools (or the better way I described at the top of this blog page).
"Actually I use needle-nosed pliers. What's your problem?!?"
Ouch it brings tears to my eyes just thinking about it,
What's wrong with a bent paper clip? Apparently it's the second most frequent use for a paper clip other than its intended use (cleaning your ears is first)...
Wouldn't it be hilarious if the .deb and .exe files for TrueCrypt contained whitelisted government trojan(s)? How would the common user detect this?
How many people install the binary vs. building from source?
Amusing if you think about it.
Scenario: Secret police want to prove that Alice accessed a secret volume whilst in transit.
Attack from paper (proof technique): "compare the volume serial numbers in the relevant .lnk files to the volume serial numbers for all the volumes that Alice mounted [in the presence of the secret police] and, if there is a discrepancy, he knows that there exists or existed a file system that Alice is not disclosing"
Problem: Alice may have allowed anyone to use her laptop during the flight.
Section 5.1 notes that a boot loader is now available in TrueCrypt v6. Was this a result of the paper, or something the TrueCrypt developers were already working on?
> Passwords for example require that the user not write them
> on post-it notes and stick them on the desk.
And this is why the password, as a stand-alone authentication method, generally sucks. Don't get me wrong, I have lots of memorized complicated passwords myself, but the cold truth is that a staggering percentage of computer users are not capable of memorizing and keeping separate properly complex passwords for each of the systems to which they require access. If your security process requires that everyone in your user community behave a certain way, you'd better have both a small user community and some pretty major consequences imposed upon violation of policy. Not even the entire body of the military does this extremely well (with a couple of notable exceptions); you can't reasonably expect groups of average users outside of milspec security environments to behave well.
> You can turn swapping off on Windows, but it
> requires a restart. On Linux you can swapon/swapoff whenever
> you like.
Er, this is not relevant to my point. In either case, you require the user to do something. Unless they're highly disciplined, they're not going to do it every time. Hell, people are accidentally shot by comrades in war despite months of severe indoctrination in fire discipline. If a well-trained Marine can forget to check a chamber every now and then, you can't possibly expect Joe Average computer user to remember to turn swap on and off. Well, maybe you can expect it, but given a user population above N, it's going to bite you in the hindquarters.
> Also the usage of the virtual memory is completely different on
> Linux and Windows... even if you have 2GB of RAM Windows
> will still use VM, however Linux only uses it once the
> memory gets 70-80% full.
If you're *really* worried about security, you should have more physical memory than you're going to use and no paging file in any event.
All this statement shows is that you *know* you're possibly screwing yourself if you have a paging file under Windows, and you're only *possibly* screwing yourself if you have a paging file in Linux. If you're worried about the paging file being a vector for attack, whether it is in use 10% of the time or 99% of the time isn't really relevant, is it?
> But there is no *enforcement* to write to the user's
> profile. Also the user would have to edit a registry setting
>to put the Documents and Settings folder on an encrypted
There's no *enforcement* to write to a user's homedir, either. By default, you can write to /tmp. Changing this requires the user (or the systems administrator) to do something.
Sure, correctly maintaining file security on a Windows machine isn't trivial, but conceptually there isn't much of a difference between the two cases (either way, you need to *do* something to make sure you're always writing possibly sensitive data to an encrypted file system).
> Yeah - my point is that Linux (and BSD etc) already *do*
> take this deny write permission. It is mature and it works
> - we don't have programs crashing because they can't
> write to /xyz... because they never could write to xyz.
Wow, uh, maybe I've worked different places than you have, but I've noticed more than one instance of horrible software running on a *NIX box that doesn't act like this at all. At least, not on a *NIX cluster that has been up and running for more than a year with active in-house development. Maybe you've been lucky and you've never worked somewhere where a non-security minded person had sudo access. But I digress... you're talking "out of the box".
Sure, out of the box a mortal user can only write to /tmp and their homedir. Well, out of the box a mortal windows user can only write to their profile. The major difference between the two is that most windows users don't run as mortals, they run as Administrator. This is just like having *NIX users running as root all the time. They could write everywhere, too.
And, if Windows suddenly magically disappeared and all the current Windows users jumped to either Macintosh, Linux, or your favorite BSD flavor, you'd see a high percentage of them running their home box as root, just like they run their Windows box as administrator.
> ACLs are not really a security advantage - as when
> networks get large enough they are far too complex to
> properly analyse. I have worked as a Server Engineer on
> both Windows and Linux, and I can say that general
> policy is to avoid ACLs if possible.
And having worked with both Windows and Linux *users*, I can say that as a general policy, you want your security software to be at "zero" to "within epsilon of zero" in the effort category for use, or your user population is going to use it improperly.
edit last comment to add:
> If you're *really* worried about security, you should
> have more physical memory than you're going to use and
> no paging file in any event.
And you should read Felten's paper on attacks against volatile RAM so that you know when you should actually turn your computer *off* even if you only use swap. :)
If your computer's location has proper safeguards against intruders, it can be rigged with the proper hardware - tripwires and such, with lasers - to signal that an intruder is present, triggering the computer to shut down.
The basic problem with Truecrypt's plausible deniability is that a chunk of crypto-grade random junk is a very strong indication that something has been encrypted. Nothing else has comparable entropy. Even though TC fills unused space beyond the end of the system partition with equally random junk which can't be distinguished from a hidden o/s (if one exists), the fact that more than the fag-end of the last disk cylinder has apparently been left unused is a dead give-away.
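The "nothing else has comparable entropy" point is easy to demonstrate: crypto-grade random fill (and ciphertext) sits at nearly the maximum 8 bits of entropy per byte, while ordinary file content sits well below it. A minimal sketch, purely illustrative and not from the paper:

```python
import math
import os
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte (8.0 = indistinguishable from random)."""
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

random_fill = os.urandom(65536)              # stand-in for encrypted/free space
ordinary = b"the quick brown fox " * 3277    # stand-in for typical file content

print(round(shannon_entropy(random_fill), 2))  # very close to 8.0
print(round(shannon_entropy(ordinary), 2))     # far below 8.0
```

A forensic examiner scanning a disk for regions of near-8.0 entropy will flag encrypted volumes and random fill alike, which is exactly why a large "unused" high-entropy region is a give-away.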
The very name "truecrypt" is enough for me to smell a foul odor. It has been my experience when anyone develops something with the opposite aim, they give it a sweet smelling name. Wait and see, something big will come out in a few months or years about this so-called encryption scheme. Baseless claims, you say, just wait. Enough people will adopt it and trust it and will be lulled into the lofty claims of the software and then the report will come out.
Go ahead, trust the binaries.. ahahahahahahahaaaahaha
After all, it's TRUE!
I'll pass on this trash software and stay with GPG.
@Merlin: "why did you use Truecrypt, rather than the OS builtin encryption (MacOSX - Vault; Windows - BitLocker; Linux - DMCrypt)?"
See, there's your reason: I use Windows and Linux (and, though seldom, MacOS) and I need my containers to be compatible between those platforms.
Furthermore, I haven't yet seen a Vista variant with BitLocker in use by anyone I know. (It only ships with Ultimate or Enterprise, and mostly you get Home Premium or Business.)
@ no trash for me: I have personally compiled TrueCrypt and compared it with the downloadable binaries: they are exactly the same thing --> no backdoor, malware, virus, or anything else embedded.
Also, TrueCrypt is not something new, so don't expect anything coming out in the next few months.
No, it simply means you can't just use common applications like MS Word inside a "hidden partition" and maintain plausible deniability; anything that uses /tmp, or the equivalent on whichever OS you're using, will reveal the existence of the hidden partition. If you're just hiding data - not running apps, OSes, and the like - it's fine. Downmod for a sensational and false headline.
They can already read your PGP and truecrypt encryption.
From a Washington Post article...
"Also, officials may share copies of the laptop's contents with other agencies and private entities for language translation, data decryption or other reasons, "
Assuming that PGP and truecrypt are on some of these laptops, it is reasonable to say that they are being cracked.
The paper uses an example of a human-rights worker in a hostile environment (e.g. an investigative journalist in Burma). Though personal safety of a journalist is an issue, it's an elected occupational hazard - protecting the local sources is far more important. As the article concludes, even perfect deniability is not going to work here - if Alice knows the password(s), they can be revealed, one way or another, and incriminate people that trusted her.
There is another way. Alice creates an asymmetric key-pair. She leaves the secret key at home, and takes the public one with her. She encrypts all her field work with the public key (subject to no-plain-text-autobackup, no-swap file, no-hibernation, no-keylogger etc. precautions). She will decrypt the information when/if she safely returns home.
Such scheme alone doesn't protect Alice at all against the legal or physical consequences of her clandestine work, but it does what really matters - it protects the sources. Alice can not reveal the decryption key that she doesn't have.
Hmm, I don't care about hi-tech attacks; I just want to make an intruder's job a little bit harder. So why am I talking like I don't care? BECAUSE EVERYTHING IS POSSIBLE (TODAY). Brains + hands + computer + keyboard + some money = that's all you need.
As for those who want the data to "burn itself away" with a duress password: it's simply not possible.
Because computers are Turing machines, there's nothing they can do which cannot be simulated (except esoteric hardware like detuned radio receivers).
Now, you could get lucky and hide the poison pill REALLY well, and they might trip it before they back your data up. Or they might not, in which case they have a backup to work from, and now you have some serious explaining to do.
>Like having Truecrypt installed in the first
>place... why did you use Truecrypt, rather than
>the OS builtin encryption (MacOSX - Vault;
>Windows - BitLocker; Linux - DMCrypt)?
There's a very good reason : it's cross platform. By using TCrypt, you can in a very user friendly manner keep your data encrypted (be it on an external HD or in a file container) and still work on it from any mainstream operating system (Win, Mac OS X, Linux/Unix). It's there, it's effective, it's reviewed... No need to write your own utility. This is a very good reason why, for genuine legitimate reasons (eg business security) you may want to use TrueCrypt.
Then play with the plausible deniability features should you like to; it's not my business.
@ Patrick Cahalan
You might look into OpenBSD's automatic swap encryption.
All they need is good block-level encryption for the rest of the disk to go with it. I asked a friend to forward a list of suggestions I made, even though I am obviously not an expert on encryption.
http://unix.org.in/UNIX-misc openbsd bsd unix comp/messages/116613_Fwd-3A-Block-level-encryption.html
You forgot /var/tmp.
As far as "why not use the operating system's built-in encryption?" goes, for Windows, that is a very easy question to answer.
Why not use Bitlocker?
1. It is only "built-in" to the Ultimate and Enterprise editions of Vista. If you do not want to pay that much, and instead have Home Basic, Home Premium, or Business Vista, Bitlocker is not available. Even EFS is not available for Home Basic or Home Premium. Also, if you are using an older version of Windows, like XP or 2000, Bitlocker is not available.
2. In addition to Ultimate or Enterprise Vista, Bitlocker also requires a Trusted Platform Module. Some people do not like Trusted Computing, and even if you do not mind, the TPM still costs extra money you may not want to pay.
3. Truecrypt is open source. Say you like encryption software that can be peer-reviewed.
4. Bitlocker uses AES. EFS uses DES and variations. Truecrypt supports Serpent and Twofish, which are more secure.
For Mac OS X users, why not use FileVault?
1. Truecrypt is open source. Say you like encryption software that can be peer-reviewed.
2. FileVault uses AES. Truecrypt supports Serpent and Twofish, which are more secure.
FileVault does not do full-disk encryption, but neither does the Macintosh version of Truecrypt. Neither supports older versions of Mac OS.
The question is more complicated for Linux users. All major alternatives are open source. What is built-in may depend on your distribution's installer, but dm-crypt and loop-aes are generally the easiest to set up on Linux.
Dm-crypt, loop-aes, and Truecrypt all offer choices between Serpent, Twofish, and AES. Truecrypt offers cascades, but in Dm-crypt and loop-aes you can just create an encrypted partition within another encrypted partition.
One argument in favour of Truecrypt is the block cipher mode. Truecrypt uses XTS, which I believe is comparable to LRW, although I have not seen as much literature about XTS as LRW. The best you can do with dm-crypt or loop-aes is CBC-ESSIV, although loop-aes does have a nice multi-key mode.
On the other hand, I'm not sure how you would encrypt everything but the /boot device with Truecrypt. However, there is nothing to stop you from using Truecrypt in combination with dm-crypt or loop-aes. Everything but /boot is encrypted with dm-crypt or loop-aes, but something else is used for user files. Then Truecrypt must compete with EncFS and CryptoFS. EncFS and CryptoFS leak file system structure data, like number of files, size of files, and approximate size of file names. Truecrypt does not. However, Truecrypt containers are fixed size, while EncFS and CryptoFS can expand within the /home partition.
Of course for multi-booters there is the cross-platform argument.
For the open source BSDs, Truecrypt may not be an option. I've heard of people getting Truecrypt to work with FreeBSD, but for the most part, BSD users who want encryption do use the encryption system that comes with their operating system: geli for FreeBSD, svnd for OpenBSD, and cgd for NetBSD. EncFS and CryptoFS may also be available.
Though what is a bit concerning is that many of the problems described in the comments can be totally avoided with a steganographic file system - though everyone who starts one seems to stop relatively soon...
"... ...can be totally avoided with a steganographic file system - though everyone who starts one seems to stop relatively soon..."
Stego, irrespective of the method, has a couple of problems:
1, It has a low payload-to-carrier bandwidth ratio.
2, There are numerous ways to detect its use.
These two are not unrelated to each other. Essentially what you are trying to do is modulate an existing signal with another signal that contains the hidden data, the lower the effective energy of the hidden signal to the carrier signal the lower the probability it will be detected.
However, if you have a low-energy signal in the presence of a high-energy signal, your receiver needs a good dynamic range for the wanted low-level signal to be clearly detected. And this is the problem: if the level of the hidden signal is sufficient to be detected by a receiver, then it can also be detected by other methods, which is why simple stego is doomed to be found.
The next trick is to use a moderate amount of energy for the hidden signal but keep its bandwidth very, very small, and spread its energy across the carrier signal's bandwidth using a spread-spectrum-type system. This is what was used for digital watermarking, and as history has shown, that was not a great success, for a number of reasons both technical and pragmatic.
Realistically, simple signal-recovery techniques on static files can have around a 96 dB equivalent dynamic range, or 16 bits. However, complex signal-estimation techniques based around Fourier / Walsh / wavelet analysis and signal averaging can beat that by a significant margin and can find most coding sequences that are likely to be used. It is no coincidence, then, that the likes of the NSA and GCHQ place ads for maths grads with those particular applied-mathematics skills...
Stego can be used for transferring snippets of information, but they need to blend in with the background carrier data or they will be spotted.
For instance, take a picture with a block of colour and a sharp edge. Simple analysis of the noise in the colour region, and the appropriate application of edge-sharpening algorithms, will reveal the fact that there is stego, and of what type, etc.
So the next stage is adaptive spread-spectrum techniques, where the carrier is analyzed through a programmable filter and the result is used to tailor the spread-spectrum characteristics so that the hidden signal blends in better with the carrier.
However, this ends up sending such truly tiny amounts of data (think 1 byte per 100 KByte) that the effort is hardly worth it.
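The "simple stego is doomed" point is easy to see with naive LSB embedding: in a flat colour region every pixel byte is identical, so the LSB plane is perfectly silent, and writing encrypted payload bits into it turns that silence into obvious coin-flip noise. A toy sketch (pure illustration, not modeled on any real stego tool):

```python
import os

def embed_lsb(cover: bytes, payload: bytes) -> bytearray:
    """Naive stego: overwrite the LSB of each cover byte with payload bits."""
    out = bytearray(cover)
    bits = [(byte >> i) & 1 for byte in payload for i in range(8)]
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit
    return out

flat_region = bytes([200] * 4096)                # a block of uniform colour
stego = embed_lsb(flat_region, os.urandom(256))  # 256 bytes = 2048 payload bits

ones_before = sum(b & 1 for b in flat_region[:2048])  # 0: LSB plane is silent
ones_after = sum(b & 1 for b in stego[:2048])         # ~1024: coin-flip noise
print(ones_before, ones_after)
```

An examiner does not need to decode anything: the mere statistical anomaly (random LSBs where the image statistics say there should be none) is the tell, which is why real stego has to match the carrier's noise characteristics.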
I think TrueCrypt's hidden volume feature is useless for plausible deniability, not just because of the paper in the post, but because it's virtually impossible to *use* the outer volume without the filesystem trying to overwrite the hidden volume. Even if the hidden volume is protected, when the outer filesystem tries to write over the hidden volume, use of that space will be prevented, making it impossible to put the outer volume into common use.
I've taken a different approach to plausible deniability. I completely filled my 300 GB external drive with random data by creating a TrueCrypt whole-drive encryption on it. Then I dumped that and created fifteen 20 GB logical partitions on it. I then use just a few of the partitions, with different passwords. That way I can reveal just a couple of passwords under duress and deny that any more exist. Since I truly don't have a use for most of that space, this is plausible. This seems to me to replicate the main advantage of the Rubberhose filesystem using current software.
Of course, this doesn't solve any of the problems mentioned in the paper.
This is an old blog but some good (and some amusing) information. I hate to revive a nearly 2 year-old post, but some of these people are so paranoid they shouldn't use a computer at all. They should just keep everything in their heads and deny it. lol
What about a triple plausible deniability system? Keep Truecrypt's system of an obvious encryption hiding another encryption inside of it, but instead of ending it there, the hidden encryption is hiding another encryption, and that encryption is hiding a third encryption.
How easy would it be, then, for anyone to actually determine that the evidence for any of the three hidden encryptions isn't all for the same hidden encryption? And if someone is actually willing to put forth the time and effort, perhaps a system that allows for an infinite number of hidden encryptions nested within subsequent encryptions, so long as adequate space is available. How easy would it be then to determine, through Windows's tell-tale signs, the exact number of hidden encryptions, and where indeed the final encryption holding the sensitive data they are actually looking for is?
Schneier.com is a personal website. Opinions expressed are not necessarily those of BT.