The Security Implications of Windows Volume Shadow Copy

It can be impossible to securely delete a file:

What are the security implications of Volume Shadow Copy?

Suppose you decide to protect one of your documents from prying eyes. First, you create an encrypted copy using an encryption application. Then, you “wipe” (or “secure-delete”) the original document, which consists of overwriting it several times and deleting it. (This is necessary, because if you just deleted the document without overwriting it, all the data that was in the file would physically remain on the disk until it got overwritten by other data. See question above for an explanation of how file deletion works.)

Ordinarily, this would render the original, unencrypted document irretrievable. However, if the original file was stored on a volume protected by the Volume Shadow Copy service and it was there when a restore point was created, the original file will be retrievable using Previous versions. All you need to do is right-click the containing folder, click Restore previous versions, open a snapshot, and, lo and behold, you’ll see the original file that you tried so hard to delete!

The reason wiping the file doesn’t help, of course, is that before the file’s blocks get overwritten, VSC will save them to the shadow copy. It doesn’t matter how many times you overwrite the file, the shadow copy will still be there, safely stored on a hidden volume.
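
For context, a typical "wipe" or "secure delete" tool does something like the following sketch (Python, illustrative only; the path is hypothetical). The article's point is that none of these overwrites reach the blocks the Volume Shadow Copy service has already copied into shadow storage.

    import os

    def wipe(path, passes=3):
        """Naive secure delete: overwrite the file in place, then unlink it."""
        size = os.path.getsize(path)
        with open(path, "r+b") as f:
            for _ in range(passes):
                f.seek(0)
                f.write(os.urandom(size))
                f.flush()
                os.fsync(f.fileno())   # push the overwrite to disk, not just the OS cache
        os.remove(path)                # finally drop the directory entry

    # wipe("C:/Users/me/secret.docx")  # hypothetical path; VSC may already hold the old blocks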

Is there a way to securely delete a file on a volume protected by VSC?

No. Shadow copies are read-only, so there is no way to delete a file from all the shadow copies.

Posted on December 2, 2009 at 6:16 AM

Comments

Calumn December 2, 2009 6:30 AM

Common sense?
If you make ANY backup of a file, it can be recovered – why not an article on "the security implications of burning your unencrypted file onto a CD-ROM"?

alfora December 2, 2009 6:44 AM

@Calumn: Because there is a huge difference between making a backup copy on purpose and being aware of this process and letting “the system” make shadow copies on its own.

Even if you turn on systems like Volume Shadow Copy on Windows or Time Machine on Mac OS X consciously, you still don't know exactly when they will do what with your files.

Backup systems like Time Machine basically tell you, "Turn me on, I'll handle the rest. Don't think about having to make backups manually. I'll do it." So basically you forget about them until you need them to get back a file after an accident.

pfb December 2, 2009 7:00 AM

This kind of problem is not limited to Windows;
every system with snapshot capabilities will expose it, at the filesystem level or below, e.g. LVM (LVM2 snapshots can be r/w, so one can theoretically secure-delete the file in every snapshot).
The problem is worse in filesystems using copy-on-write (Sun ZFS, NetApp WAFL or Oracle Btrfs). And of course they also have snapshots (at least the first two; I didn't check for Btrfs) in addition to COW.
In those cases it's not just that you may be unable to secure-delete in some cases; you never could.
Now one can point out the inherent schizophrenia of trying to enhance the technical ability to secure long-term storage of a dataset, via redundancy, versioning etc., while maintaining the capability to make it completely disappear.

echobeach2 December 2, 2009 7:02 AM

You can delete these via Control Panel > System > System Protection > Configure > Delete. Then wipe all free space.

banduraj December 2, 2009 7:06 AM

I agree with Calumn. This either falls under common sense or a training issue. At least with shadow copy, schedules are configured for when snapshots are taken. If a document needs to be secure, then it should have been secure from the start.

tb December 2, 2009 7:07 AM

In Windows, you can use EFS and/or BitLocker to reduce the likelihood that someone gains unauthorized access to the shadow copy.

IMHO, the shadow copy feature should have been restricted to operating system files only, possibly with a whitelist that you could use to include user-specified files/directories. As it stands, it mostly serves as a more annoying/difficult recycle bin. I see no compelling reason to leave it enabled.

Clive Robinson December 2, 2009 7:14 AM

@ Bruce,

I’m surprised you did not mention that it has implications for “encrypted” files as well.

If you have two nearly identical plaintexts stored away as encrypted files, then depending on the encryption mode or type you can use one file to break the other.

It is why they always tell you never to use the same pad twice in a one-time-pad system.

The same applies to stream ciphers under the same key.

Also to block ciphers when used in odd modes.

As I have said before, if you don't know how your file encryption software works you may be in for a nasty surprise.

Believe me when I say that there are some devices out there that use good encryption in a bad way because the designers do not know any better.
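
A minimal Python illustration of the keystream reuse Clive describes (the two "documents" and the random keystream here are stand-ins for any stream cipher output): XORing the two ciphertexts cancels the keystream and leaves only the XOR of the plaintexts.

    import os

    # Two near-identical plaintexts encrypted under the same keystream (the "two-time pad").
    doc_v1 = b"Meet at the north gate at 09:00 tomorrow."
    doc_v2 = b"Meet at the south gate at 21:30 tomorrow."

    keystream = os.urandom(len(doc_v1))      # reused for both files

    def xor(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    ct1 = xor(doc_v1, keystream)
    ct2 = xor(doc_v2, keystream)

    # The attacker never sees the keystream, yet XORing the ciphertexts cancels it:
    diff = xor(ct1, ct2)                          # equals doc_v1 XOR doc_v2
    print([i for i, d in enumerate(diff) if d])   # byte positions where the versions differ
    print(xor(diff, doc_v1))                      # knowing (or guessing) one version yields the other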

Clive Robinson December 2, 2009 7:19 AM

Oh and whilst I remember,

Don’t think Microsoft knows what it is doing with encryption software.

Have a google for the X-Box and TEA cipher being used in the wrong way.

Anyway, the nurse has just put my lunch in front of me, and for some reason the sight of it has set my recovery back a few days, as I now feel quite ill again 8(

Enix December 2, 2009 7:48 AM

“If a document needs to be secure, then it should have been secure from the start.”

Blaming the user and plugging your ears to a problem doesn’t help. Of course training is the first line of defense in security. We train people to drive cars safely, too, yet there are still airbags and seatbelts.

There should be other options when training fails (we're only human). In this case, preferably one that doesn't involve a "scorched earth" policy of wiping the entire shadow volume.

Thomas December 2, 2009 8:02 AM

I don’t think ‘securely’ deleting individual files from a filesystem is possible.

Every time you open a file manager (whether it be Windows Explorer, Nautilus or whatever), it creates a thumbnail of the document which may or may not contain enough information to be dangerous.

Every time you open, edit and save a file, the application you use is as likely to create a new copy as it is to overwrite the existing one.

That's a shadow of the file: sectors containing the data which a 'secure' delete of the current file won't touch.

Even if you save in place, shortening the file leaves the unneeded sectors at the end of the file up for grabs.

Some document formats these days are zip files, merely opening them for viewing may result in them being extracted to disk somewhere.

Then there’s swap of course…

Print previews, temp copies, disk indexers, defrags, reallocating weak disk sectors … more shadow copies.

I think it’s safest to assume that once the data touches your disk it’s there to stay.

MichaelGG December 2, 2009 8:12 AM

Doesn’t this happen on flash drives as well? As they have their own wear leveling mechanism and so on, won’t deleted data stick around until the writes get back around to that sector?

Np237 December 2, 2009 8:29 AM

Thomas is right; the only safe solution if you don’t want some files to be readable by others is to encrypt your entire disk, including the swap area, using safe solutions like dm-crypt.

It doesn’t keep the data safe against all kinds of attacks, of course, but at least you don’t have to wonder about what every manipulation could do to your data.

David December 2, 2009 8:47 AM

@tb: I’ve found the ability to go back to previous versions of a file quite useful on occasion. Backing up system files isn’t all that useful, since they can always be restored. Backing up user-created files is far more useful, since they can’t necessarily be recovered.

@pfb: It reminds me about one problem with the version control system Subversion, the lack of an ability to permanently remove things checked in by accident (either because they are sensitive, or unnecessary and very large). The designers worked hard on creating a VCS that wouldn’t lose things, and in general succeeded.

vedaal December 2, 2009 8:49 AM

Am not familiar with Volume Shadow Copy, but suspect that it won't copy anything unless the file is saved to disk first.

(If this is not correct, then someone please point it out. Thanks!)

If VSC really doesn't kick in until a document is saved, then it might be defeated by using an encryption program that encrypts the 'current window' (PGP, GnuPG with GPGShell or WinPT) and then wipes the clipboard, and then saving the encrypted copy.

(All formatting will be lost, so it might work for pure text, but not for Word or OpenOffice documents.)

kangaroo December 2, 2009 8:51 AM

Np237 is right. File level encryption is almost impossible to secure — you’re depending on the system to NEVER cache your data. You have to encrypt end-to-end.

Graeme Teesdale December 2, 2009 9:02 AM

Is this perhaps why the much maligned NHS IT project can't promise to 'delete' your records from its systems? A design constraint perhaps, resulting in a failure demand process, creating complexity in the dependencies of data and backup management?

kevin December 2, 2009 9:06 AM

On the mac, if you know you want the file encrypted ahead of time you can create an encrypted disk image, mount that then save your files directly to it. That way no plain text is ever on the hard drive (except what you get with an unencrypted swap and the other issues)

Olive December 2, 2009 9:43 AM

This doesn't make any sense. If the copy is never deleted, the drive would eventually fill up with all these shadow copies. If there is only one shadow copy per filename and it does get overwritten each time a change is made to the working copy, then don't say it is read-only.

And this stuff about having to overwrite a file on disk several times to make sure is nonsense. If it were this simple, then why wouldn't an application have to save a file over and over and over again, to make sure it was correct? If you could save more information in each spot, then why can't magnetic media regularly be used to store many times as much information as advertised? That's because they know it can't.

HJohn December 2, 2009 9:50 AM

@Olive: “This doesn’t make any sense. If the copy is never deleted, the drive would eventually fill up with all these shadow copies. If there is only one shadow copy per filename and does get overwritten each time a change is made to the working copy, then don’t say it is read only. ”


I don't think the point is that it never gets overwritten and that a file will be available forever. I think the point is that a "securely deleted" file can be recovered for an undetermined amount of time after you "securely delete" it. This can be months. That is unacceptable.

My solution is to stick with XP and hopefully enough people will put pressure on MS to fix this. It should be pretty easy to fix, if they care to.

This is one more reason I don’t upgrade until an OS has been in production for long enough to find some of these things out.

bear December 2, 2009 10:05 AM

If you created the new file on an encrypted volume to start with (thinking of the simple right-click > New), I think you could negate this.

You could then encrypt the file before you send it where you want and delete it from the encrypted volume. If I understand the function of the shadow copy correctly, it would back up / save restore point information for the encrypted volume file but not the actual file.

Am I wrong?

Petter December 2, 2009 10:13 AM

Who relies on securely deleting individual files? I would hope no one.

Once sensitive data has been written to a disk, that disk is sensitive. Period. To remove sensitive information, wipe the disk.

paul December 2, 2009 10:32 AM

One solution that doesn't seem to have been mentioned is replacing the contents of the file (preferably many times) between restore points and before deletion. Doing that enough times will push the version of the file that contained the data to be protected off the back of the (effectively) FIFO space containing shadow copies. Of course, that will also make it impossible to recover some things that you might want to recover, but eh.

greenup December 2, 2009 10:51 AM

Lots of good comments above; I especially like the one about “the inherent schizophrenia of trying to [have backups] while maintaining the capability to make it completely disappear.”

I agree with others that encryption must be “end-to-end”, as any time spent unencrypted presents vulnerability, but when is the “start”? Starting with an MS-Word doc, it is tricky to make sure it is saved in a strong encrypted form. The built-in encryption is not suitable, which leaves saving to an encrypted filesystem, which may not be the way you want to keep the file long-term.

Then there are the AutoRecover files MS Word creates… has the user taken care to make sure they are saved on the encrypted partition? When do they get cleaned up?

banduraj December 2, 2009 10:56 AM

@Enix: I think you missed my point. If VSC poses a security problem, then so do all forms of data backup. When backups of any form are in place, then data retention is not only implied, but implicit.

David Harmon December 2, 2009 12:08 PM

For the encryption issue, you certainly need to have some filesystem space which is not shadowed (RAMdisk?), for temporary files and the like. Pre-encrypted stuff should probably go there, likewise temporary copies from viewing, unzipping, etc.

Otherwise — yeah, it’s a general tension between two incompatible goals.

BF Skinner December 2, 2009 12:52 PM

@Graeme Teesdale “Bruce…. Would you like to suggest a solution?”

Bruce's solution at one time was encrypt, encrypt, encrypt. But somewhere between Digital Secrets and Beyond Security his thinking appeared to change to acknowledge that "computer" security wasn't just a technical issue.

Security is a human condition that expresses itself in technology.

Case Study
A user, not mine, wrote classified material to an unclass system that was mine. Large system, so nobody noticed it for months. Large, highly redundant, high-availability database system. We spent weeks trying to find all the places that data had been written to: log files, transaction files, backup tapes, the alternate processing site, the off-site tape set, the annual national archive copy.

The designers had spent so much time making sure that data wasn't lost that, when the situation required it, we couldn't have high assurance that we had disposed of all the contaminated areas.

Had that user simply exercised caution and paid attention to the document markings, I wouldn't have had to spend tens of thousands of dollars and feed 3 months of DLTs to the incinerator. ("They're gonna reimburse for these tapes aren't they?" I wish I could do the Spock eyebrow thing for questions like that.)

I used to wonder why every space ship has a self-destruct. Then someone said that only the person who can destroy a thing truly controls it. I think it was Paul Atreides.

Our data is the same way. If we can't destroy it, we don't control it. Ditto if we can't preserve it. If we assume a secure delete clears the data without understanding the system (or, as Clive also says, the encryption), our logic is faulty.

Without understanding we can't preserve or destroy. How much of the system do we understand? How much control do any one of us really have over our systems?

Shane December 2, 2009 1:01 PM

Time machine and Windows volume shadow copy are tools to help the average user (with minimal effort) back up their data.

Did the world *really need an article that is about as informative as “if you make a copy of a file, and delete the original, the copy you made is still there”…?

If destroying a file to a near unrecoverable state is that important for someone, that someone can figure it out on their own. These tools are for the masses, not the paranoid, child prons, crypto-nerds, et al.

Let’s keep this in perspective people, it’s far more advantageous (and surely more desirable) for the everyday user to have simple backup / restore capabilities than it is to have a rock-solid way of destroying a file.

Shane December 2, 2009 1:10 PM

@Olive

“And this stuff about having to overwrite a file on disk several times to make sure is nonsense.”

It is not nonsense at all my friend. Look before you leap…

When a file is created, a descriptor is built describing locations on the physical disk where the pieces of the file have been stored, as well as how to put them together.

When a file is deleted, the pieces on the physical disk drive are NOT REMOVED. They stay there until a new file overwrites them (which happens purely by chance and availability at the time the new file is created). The file, when deleted from the file system, is simply 'unlinked'. The description of the file and its pieces is removed from the filesystem, but the data IN those pieces remains.

Hence, overwriting the file with nonsense PRIOR to unlinking (or 'deleting') it is the only possible way for someone to actually remove the file from the physical drive (without torching the drive). However, even doing this a number of times can still leave behind magnetic traces of prior data that was stored on the platter in that particular location.

Filesystems 101.

Olive December 2, 2009 1:43 PM

@Shane – that’s not what I’m talking about. This isn’t about deleting a file and the OS doesn’t really delete it but just marks the space as available and until then the data is still out there on all those sectors blah blah blah re filesystems 101.

Now, why don’t you explain what is meant by “magnetic traces”. That’s the BS I’m talking about. And don’t bother with links to sales literature some call “white papers”. I want you to explain what is meant by “magnetic traces”. The only way you have to even know what is on the disk is by the drive electronics. Go ahead and explain how you know the data the drive reports is present on a particular sector isn’t really that at all, but REALLY some secret hidden value that you know it is. And that you know this without prior knowledge of the previous content. I want someone to explain without using twenty year old references why you need to wipe a drive over and over and over again because there might be “magnetic traces”.

JimFive December 2, 2009 1:52 PM

I think this is what Olive is disputing.

"However, even doing this a number of times can still leave behind magnetic traces of prior data that was stored on the platter in that particular location."

and I think Olive is right. It used to be that there was enough space between the tracks on a hard drive that you could read the traces of previous data on the spaces in between the tracks. However, as you may have noticed, HD capacity has increased and the drives are the same size (or even smaller). This was achieved by increasing the density of the data on the drive. At this point there probably isn’t enough of a gap to read traces from.

JimFive

Shane December 2, 2009 2:16 PM

@Olive

“I want someone to explain without using twenty year old references why you need to wipe a drive over and over and over again because there might be ‘magnetic traces’.”

Sigh…

You don't *really need to make multiple passes on modern drives (as far back as the last decade perhaps?), but the *thought (paranoia?) was/is that the head positioning for many hard drives could not be *completely relied upon to actually *write the data in the *exact physical location where the previous data was written, hence leaving behind magnetic traces of the previous data on either edge of the new data on the track (I liken it to cross-talk on a multi-track reel-to-reel or cassette recorder). The traces (theoretically) remain because the head positioning wasn't exact, and hence there could be no guarantee that the head wiped all the previous data.

Of course, nowadays the space between the tracks on the platters is so infinitesimally small (due to vendors packing in the storage space) that there is really no way to recover usable data (i.e. more than a bit here or there) from them once they've been overwritten once, and surely the science behind head positioning has advanced far enough to stuff this under the tin-foil-hat hood.

So yes, perhaps overwriting something many times is (many times – 1) too many, but so be it. If the DoD has a best-practice guide on destroying data on magnetic media, I have no problem sticking to it, overkill or not.

As for “Go ahead and explain how you know the data the drive reports is present on a particular sector isn’t really that at all, but REALLY some secret hidden value that you know it is. And that you know this without prior knowledge of the previous content.”

I think you are missing a foundational understanding of magnetic media here. There is no 'hidden data', and the magnetic traces (when/if present) are just that – traces. There are error-correction algorithms everywhere in your hardware and software to differentiate the actual data from the noise.

Hrm… what was that toy with the little magnetic wand that you used to make hair-styles with little iron filings on some dumb looking paper face? It’s not always as simple as it might seem to get ALL the filings to play nice, is it?

Try this: record some type of noise onto a cassette tape. Now, record over it with silence.

$100 says that if you turn that shit up loud enough, you'll still hear a faint trace of that noise coming through. In a hard drive's case, this noise is ignored. In a laboratory, however, this noise can be used to piece together data that was there at one point in time.

But, like I said, I agree that those days are likely long over for magnetic hard drives.

Shane December 2, 2009 2:19 PM

Also:

“hence there could be no guarantee that the head wiped all the previous data.”

should read:

“wiped all *traces of the previous data”.

Bruce Clement December 2, 2009 2:20 PM

@JimFive “At this point there probably isn’t enough of a gap to read traces from.”

Is this because we are approaching some theoretical physical limit to magnetic data storage, or because our technology isn't yet good enough to do better?

The advance of technology over the last 50 years shows how today's limits will often be laughable tomorrow.

If there is any trace left behind, it’s safest to assume that someone will work out how to read it and others will build on it to the point where it becomes reliable and affordable to governments and business.

HJohn December 2, 2009 2:27 PM

@Bruce Clement: "The advance of technology over the last 50 years shows how today's limits will often be laughable tomorrow."


Ditto.

Goes back to something I said on another thread, about how just because something is too voluminous to query or track today doesn't mean it will always be so. (The discussion was over massive data collection and yottabyte processing limitations.)

Put another way, collect data, or put the mechanism in place, to collect data at a time when you can say with honesty “we can’t do much with it”. Then when you can do a lot with it, you already have it and people are used to it.

Shane December 2, 2009 2:29 PM

@Bruce Clement

“Is this because we are approaching some theoretical physical limit to magnetic data storage, or because our technology isn’t yet good enough to do better.”

Well, think about the physical size of a modern hard drive. It hasn't changed much in the last 10 years (save for laptop drives), but logical space sure has grown. This is simply due to the increasing ability to work in smaller and smaller units as our technology advances. Hence, the same *size platter built today is incredibly more dense in terms of the number of tracks it contains and the size of the heads that read them, whereas, say, 15 years ago the ability to make tracks that small and heads to read them just wasn't there yet.

As for approaching a limit, doubtfully. You just need more surface area on the platters, and voila! more space! Although, I highly doubt we’ll be seeing any laserdisc-sized hard drives in the consumer markets any time soon 😛

Shane December 2, 2009 2:32 PM

For the record though, I don’t design or manufacture magnetic hard drives, so that was just a best-guess.

Roger December 2, 2009 2:49 PM

Just to clarify the confusion here:
1. If "deleted" data is not overwritten, it can be fairly easily recovered in software without modifying the equipment. Numerous utilities exist for this purpose. The success rate depends on how much use the disk has had since the deletion, but on huge modern disks, total failure is rare.

2. However, a single overwrite defeats software-only approaches. Once an overwrite occurs, you need to pull the drive apart and start fiddling around with the electronics. This limits the attack to fairly capable and determined opponents.

3. It has been demonstrated in the past that overwritten data could be recovered by replacing the processing electronics for the drive head — essentially a 1-bit ADC — with a 12-bit ADC, and doing some careful digital signal processing on the results. Effectively, a 1 bit overwritten by a 0 bit leaves a slightly different magnetisation level to a 0 bit overwritten by a 0 bit, and with a higher resolution reading this can easily be detected. This is the motivation for multiple overwrites.

4. The claim that overwritten data might be detectable through track misalignment has been often made and seems plausible, but so far as I know it has never been demonstrated publicly. It would seem to require some other method for examining magnetisation domains rather than the existing read head, and thus is even harder than the ADC attack. On the other hand, it would also be harder to defend against: if the wobble in the hub is systematic, once a sliver of track has slipped out of the band covered by the read head it can probably never be overwritten.

5. It can be argued that changing storage technologies can make these attacks harder or impossible. It is an easy claim to make and difficult to disprove without going to the cost and expense of repeating the experiment with every new drive technology. The cheapest counter-argument is to note that the subject matter experts — the national intelligence agencies — have not changed their view that once a drive has contained classified data, the only way to downgrade the classification of that drive is total physical destruction.

6. There is, by the way, a closely related and quite dangerous attack related to drive controllers that automatically map out bad sectors. If your data is stored in a sector that gets mapped out, overwriting is nearly impossible. Yet software-only recovery may still be possible with data recovery software that, in effect, repeatedly re-reads the same bad sectors and takes the average result.

HJohn December 2, 2009 3:00 PM

I think the larger concern here is over the ignorance of the average user (and by ignorance, I don’t mean stupidity, I mean lack of expertise or sufficient knowledge.)

Of course, VSC has a valuable use. As most users don’t encrypt their files, it is moot for them.

The overall point is that people may attempt to encrypt or securely delete information and have absolutely no clue that the file still exists or is in unencrypted form in the VSC.

Basically, for the average user who attempts security, the system does not do what they think it does, and has a back door to get their data that they don’t know exists.

Even well-informed and expert users may not figure this out for quite some time.

For most users, they may never know this. It may never affect them. But what they don’t know can hurt them.

HJohn December 2, 2009 3:03 PM

@HJohn: “Of course, VSC has a valuable use. As most users don’t encrypt their files, it is moot for them. ”


Update… most don’t encrypt or securely delete.

Dick Cheney December 2, 2009 3:55 PM

I can remotely control your webcam and see what you are typing. And about the MP3 player and table radio that you got by surprise … they’re watching and listening to you right now.

Oh, and the thumb drive you “luckily” found … SURPRISE! The Thumb Drive and your wireless mouse are sending me reports hourly!!

Roy December 2, 2009 4:08 PM

After overwriting with zeros, the probability of recovering a single bit is about .57, so the odds of recovering the entire byte are about 90 to 1 against.

On *nix machines, the ‘dd’ command is useful for reading from /dev/zero to write a file(s) to fill all available space. Then delete the obliterator file.
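
Roughly the same thing in Python, for what it's worth (a sketch only; the target directory is hypothetical, and this still misses slack space, remapped sectors and copy-on-write filesystems):

    import os

    def fill_free_space(directory, chunk=1024 * 1024):
        """Write zeros until the volume is full, then delete the filler file."""
        path = os.path.join(directory, "obliterator.tmp")
        zeros = b"\x00" * chunk
        try:
            with open(path, "wb") as f:
                while True:
                    f.write(zeros)          # raises OSError (ENOSPC) once the disk is full
                    f.flush()
                    os.fsync(f.fileno())    # force the zeros onto the disk, not just the cache
        except OSError:
            pass
        finally:
            if os.path.exists(path):
                os.remove(path)             # free the space again

    # fill_free_space("/mnt/target")        # hypothetical mount point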

Bruce Clement December 2, 2009 5:00 PM

@Shane “As for approaching a limit, doubtfully. You just need more surface area on the platters, and voila! more space”

I was thinking more about data density than making physically larger drives again.

I started my career with chest high fixed disks and 5MB (2.5 MB / side) removable disk cartridges that were a cubit across. No thanks.

Nostalgia be damned, I want storage that goes in my pocket.

Clive Robinson December 2, 2009 5:03 PM

@ Roger,

“The claim that overwritten data might be detectable through track misalignment has been often made and seems plausible, but so far as I know it has never been demonstrated publicly.”

It has but your beard would need to be a bit “badger” to remember it.

Back in times past, "HD platters" came in packs about 16″ across. The operator used to press an "unload" button on a machine the size of a laundromat machine, lift the lid, lift out the platter pack, drop in a new one, close the lid, etc. All for 20MB of storage…

Like the floppy disks that followed ten years later, the design of the system had a "servo mechanism" that kept the head over the right place; it was partly based on the drive mechanics and partly on the drive signal quality.

The result was that a platter pack from one drive would work in another drive, but… the mechanical servo tracking could be half a track off and still work.

You used to be able to get hold of a funny looking device that looked just like those magnifiers that you put on top of a map with your eye to the lens. In the bottom was a thin glass cell (not unlike a calculator LCD display) that contained very small magnetically sensitive particles that changed colour depending on the strength and direction of the magnetic field.

You could not only clearly see the tracks and half tracks but the actual data bits as well.

A very big chunk of the reason you cannot make sense of residual magnetic data easily these days is first of all that the orientation of recording has changed, and secondly that it is heavily encoded with a large amount of inter-symbol dependency. That is, an individual bit is effectively encoded across five or more bit spaces on the actual platter, giving multiple intensity levels, not just a 1 or 0.

That being said, if you put a hard drive platter into certain types of scanning electron microscopes then the magnetic domains and their intensity are very easy to see. The price of a second-hand "Oxford Magnetics" SEM is quite low these days; the question is whether you would be able to modify it (I suspect not).

Squeezing data onto drives has long since passed the MFM and RLL encoding of the late 1980s. Vertical recording and complex trellis encoding are just some of the ways you increase the number of bits per square cm.

However, a word of caution should be sounded. Many drives these days have their own on-board CPU, the program for which is stored in flash; it can be easily overwritten (if you know the magic codes). Thus it may be possible to upload a custom program to get at the residual data etc.

Which brings me on to fault tolerance.

I'm glad you brought up #6; it needs to be said, and often. Some drives allow up to 30% dead space these days, not just one or two cylinders here or there; the cost of an extra head and platter are minimal by comparison to the advantages on high-availability drives.

Again, reflashing the F-ROM on drives makes all of this "hidden data" available to those that can do it.

It is not a question of whether the data can be got at, but who has the keys to the kingdom via the flash ROM etc etc.

Nick P December 2, 2009 5:34 PM

Overwriting a disk is totally unnecessary. A long time ago I had to worry about forensics and independently developed an idea that existed in academic literature: losing the encryption key = losing all the data. So long as disk encryption is done right, one destroys 256-1024 bits (key + administrative stuff) instead of, say, a thousand billion bits. The time taken was reduced from nearly a day to seconds. The scheme combined free/cheap disk encryption and software that generated the master key from a password and a long random string, stored in a coprocessor's memory, a USB key or on rice paper. Depending on what's used, all improve erasure time. The coprocessor improves assurance, while USB or paper reduces cost.

I’m still using this trick today to protect confidential data, but it’s been improved. A cheap embedded board w/ TRNG is wired via serial port (most trustworthy and portable type of driver). Embedded board generates keys and stores them in a secure RAM area. Upon a button press or authenticated command, keys are zeroized then overwritten w/ random data. Fault-tolerance protocols prevent single points of failure, but increase risks. I won’t describe them because I may have underestimated their risk: risk reduction by obscurity is better than obvious attack angles. I only vouch for security & reliability of two node cluster of coprocessors in same room. Additionally, if one deletes, other is told to delete as well. Total cost to use a crypto cluster to protect & zeroize terabytes of data: $500. Way cheaper than buying and destroying hard drives… more trustworthy than anticipating the next advance in forensics.
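
A minimal sketch of the general crypto-erasure idea (not Nick P's actual design), using Python's third-party cryptography package; the paths are hypothetical, and in a real system the key would live in a coprocessor or on a removable token rather than in a plain file.

    import os
    from cryptography.fernet import Fernet

    KEY_PATH = "/mnt/usbkey/master.key"    # hypothetical: key kept on removable media
    DATA_PATH = "secrets.enc"

    def store(plaintext: bytes) -> None:
        key = Fernet.generate_key()        # 256 bits of key material (base64 encoded)
        with open(KEY_PATH, "wb") as f:
            f.write(key)
        with open(DATA_PATH, "wb") as f:
            f.write(Fernet(key).encrypt(plaintext))   # only ciphertext touches the data disk

    def crypto_erase() -> None:
        # Destroying the tiny key renders the large ciphertext useless; no need to
        # overwrite terabytes. Note the key overwrite itself is still subject to the
        # shadow-copy / wear-levelling caveats discussed in this thread.
        with open(KEY_PATH, "r+b") as f:
            f.write(os.urandom(os.path.getsize(KEY_PATH)))
        os.remove(KEY_PATH)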

KC December 2, 2009 6:49 PM

Firstly, as the article mentions, you can exclude certain folders and files (they provided a link to http://msdn.microsoft.com/en-us/library/aa819132%28VS.85%29.aspx for this). I do agree, from a security standpoint, that VSC should be disabled by default for non-essential files, but I think for the ordinary computer consumer, being able to resurrect files thought lost trumps security concerns. (Most readers of this blog are not ordinary computer consumers.) So while I’m not exactly pleased VSC takes such liberties in “protecting” my files, Microsoft has no real reason to change its course.

Jay December 2, 2009 7:04 PM

@Nick P: assuming your crypto is done properly! If it's not secure against a key recovery attack (XOR, or generally reusing the output of a stream cipher, is trivially breakable; not properly sector-keying opens weaknesses too), then you can't really delete the key the way you think you can…

Brian M December 2, 2009 7:33 PM

I would be most worried about "Word" documents on Windows, since it creates backups regularly so that non-tech-savvy users, who refuse to save more than once a day, don't lose their data when they reboot (or have Windows/Word crash). If you save into an encrypted volume, is Word smart enough to either place the backup file in the encrypted location, or just not make a backup?

neill December 2, 2009 8:55 PM

Really sensitive data should be stored on a removable medium, e.g. a USB stick or RW DVD, and locked in a safe offsite, so that the key and the data are at two different locations.
If you wish to destroy the data anyway, you either put the USB stick in an oven at 400F for an hour or the DVD in the microwave for a few seconds (in both cases avoid the fumes!) – no recovery possible.
As for the swap – it's a crime against computer speed anyway to have it; buy more RAM and disable swapping entirely.

wo-pah December 2, 2009 9:57 PM

@neill:

Or you can just insert the USB stick under your foot and stomp. Personally, I’m surprised there’s no market in guaranteed easily destroyable thumb drives (fragile chips, voltage spike mechanism included, etc.) I suppose it’s tough to sell such things based on proven unreliability, though.

Nick P December 2, 2009 10:29 PM

@ Jay

Yeah, that the encryption works is an obvious requirement. I also stated that in my post. I don’t know anyone who’s beaten TrueCrypt or PGP at the cipher level, so they seem to be fine for this. The important part is isolating the key (or components), keeping it in RAM, and deterministic timing of the zeroize function. I’ve used an RTOS to ensure zeroize gets the most priority. The client devices need to keep the key in RAM and have swap/paging turned off as well. Or be in encrypted hidden OS partitions to make it more difficult for attackers. Security has to be baked into this scheme on many levels: physical; OS; applications; network; storage. Deniable computing is the hardest kind.

@ network scanner

No, Bruce is a pundit now: he usually talks about the problem, but doesn’t solve it. That’s what the comment section is for. 😉 If you want a midterm solution, see my post from earlier today. It’s field proven. Unless you can get a NSA IME, then my scheme is as good as it gets without total physical destruction of media.

neill December 3, 2009 12:44 AM

@wo-pah
Stomping won't do it; you might break the PCB inside but not the silicon chip – after all, don't forget it's a piece of metal… It might even be possible to read parts of shattered chips (e.g. with an electron microscope, by measuring the beam deviation due to the electrostatic charge of the cells).
Heat destroys the layers by 'merging' them; also, if the insulator starts leaking due to heat, all flash cells will be empty after a while…
Usually NO manufacturer advertises the easy destructibility of their products.

Clive Robinson December 3, 2009 12:52 AM

@ Nick P,

Howdy I hope life is well.

A couple of points to add that I blogged about way back, which other implementers should think about.

Firstly, if you are going for top-end security, don't just statically store keys in memory; all physical devices including dynamic memory suffer from "burn in", so make the keys dynamic.

A simple (but not totally secure) method is to have the key stored in two or more parts that are brought together to make the key.

Just for ease of explaining, take the key, halve its value and store it in two locations A and B. To get the value back you add A and B. Every few milliseconds a non-interruptible interrupt process gets a random value, adds it to A and subtracts it from B. Thus the two memory locations change atomically and very, very often, to prevent memory burn-in.
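
A minimal sketch of that trick in Python (illustrative only: a random split rather than a literal halving, a plain loop standing in for the non-interruptible interrupt, and an arbitrary 256-bit width):

    import secrets

    MOD = 2**256                      # fixed width so the shares wrap cleanly

    key = secrets.randbits(256)       # the real key (never stored directly)
    A = secrets.randbits(256)
    B = (key - A) % MOD               # invariant: (A + B) mod MOD == key

    def refresh():
        """Re-randomise the shares so neither location holds a static bit pattern."""
        global A, B
        r = secrets.randbits(256)
        A = (A + r) % MOD
        B = (B - r) % MOD             # both updates must be atomic in a real system

    def recombine():
        return (A + B) % MOD          # reconstructed only at the moment of use

    for _ in range(1000):
        refresh()
    assert recombine() == key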

Oh, and the system needs to be really fragile in some circumstances, so an operator's "dead man's handle" is a must.

Keys can be quite safely stored in bits in multiple jurisdictions. So for an international company the master keys can be held under a voting (n of m) system with pre-arranged duress codes.

Thus it can be demonstrated that you don't and cannot possibly know the key and that you cannot get at it.

Oh, and the key-holding box has its own public key system where it only holds the private key, and the key shares are sent to it with time-dependent codes to prevent replay attacks etc.

As I said not 100% but it adds a couple of 9’s to the end.

Clive Robinson December 3, 2009 1:03 AM

@ Brian M,

“I would be most worried about “word” documents on windows”

Not just because of the fact that MS writes them all over the place and frequently.

They have a lot of “known plaintext” in the document headers. Which makes cryptanalysis so much easier (and yes the NSA question has been asked and it’s an open question).

Also, from a forensic point of view, they have a habit of not removing "undelete" information from the file, and that makes redaction and all sorts of other issues problematic, as has been seen; even the US Gov's ears have gone red on occasions.

Petter December 3, 2009 2:26 AM

@Shane: You should look into “Overwriting Hard Drive Data: The Great Wiping Controversy” by Wright, Kleiman, and Sundhar.

It is not possible to recover overwritten data.

Clive Robinson December 3, 2009 4:15 AM

@ network scanner,

Although Nick P says Bruce is a "pundit", what he has not said is why Bruce has to look that way.

There is the "Billy the Kid" syndrome, where young bucks feel they have to "out draw" the master gunslinger.

Now the problem, as I'm sure both Bruce and Nick P will acknowledge, is that "we don't know enough" for certainty.

For instance there was the recent thing with the number of rounds of AES and the key expansion.

In practice it was just a matter of fact that did not change the reality of the use of AES (that is, it is not a practical attack).

Various people argued one way or the other about what Bruce had said at one point or another, effectively to "score points".

As I have said just a few days ago, I know enough to know I don't know enough, and I will put my hand up to that every time.

I'm reasonably sure that most experts in any given problem domain will say the same.

People with "certainty" fall into two camps.

The first are those who have thought up an "idea" and then go through a degree of formal reasoning and experimentation to give the belief in their idea a secure footing.

The second see correlations etc. and believe they are right, and don't go through the formal reasoning process.

The first are usually only too aware that their reasoning may have faults or only describe a subset of conditions, and thus offer it up for the review of all others and will then test the reasoning. If it passes that process then it becomes an established idea and may in time become a law or axiom.

The second tend to take the "artist's" approach, in that they take personal affront that you would question their idea and thus their "beautiful mind"; they hurl out accusations that the questioner does not understand the subject and often make derogatory remarks. When shown they are wrong they tend to just disappear back into the woodwork waiting to get revenge.

Thus the first group tend to follow the Newtonian method of science and as a consequence move the body of human knowledge along.

The second group tend to be stuck in the times of the ancient Greeks.

There is in recent times a third group of people: these are "engineers". They have a need to find solutions to problems (scientists tend to be the opposite, in that they have a need to find problems).

They use experience and well-founded knowledge to make pragmatic decisions as to how to solve a problem. Often they are acutely aware their solution is far from perfect, but they try to balance or mitigate the problems they know of.

When it comes to information security you are no longer dealing with "tangible goods" but "intangible information".

The real physical world of tangible goods has real tangible limits based on physical laws, and thus tends to follow the bell curve and can therefore be modelled and investigated by known mathematical models.

The intangible information world has no tangible limits to what can be done; this makes testing difficult at best. The bell curve might or might not apply, and thus most of our mathematical tools, developed with physical assumptions (axioms if you will), may or may not produce valid results.

For instance you have to differentiate information from information that is stored or in transit. When in store or in transit, information uses physical objects and is subject to the laws of matter, motion and relativity.

When not in store or transit, information is not constrained by the laws of matter and motion, just those of "probability", where everything is equally probable.

Shannon came up with the idea of what we call "entropy" from the laws of thermodynamics. It is a concept that is sometimes difficult to get a grasp on, for various reasons. In reality information is unbounded; it is only when it comes into the physical world that it becomes quantified (i.e. bits) and constrained (mass/energy/forces), and thus you can look at "entropy" as a measure of "possibility" that is unbounded.

We have few models that deal with unbounded non-tangible items. And those we think we have need to be carefully tested for assumptions based on the physical world.

As a consequence we are currently "making it up as we go along", which means we have a high degree of chance (not probability) of finding that when tested our models contain hidden assumptions that affect the validity of the result.

Is it therefore any wonder that the person who did most for Quantum Computing in the early days was an engineer not a scientist?

Roger December 3, 2009 4:23 AM

@Clive Robinson:

It has but your beard would need to be a bit “badger” to remember it.

Ah, my beard has a little badger in it … perhaps not enough, as I recall seeing the old “washing machines” when I was a young pup, but didn’t learn many technical details before we moved on to other things.

However, when I said:
“The claim that overwritten data might be detectable through track misalignment has been often made and seems plausible, but so far as I know it has never been demonstrated publicly.”
I should have been a little clearer. I believe it has been demonstrated publicly, and in many EE classes, many many times for floppy disks and tapes. I was unaware of it being done for old Winchester drives but I’m not surprised. But we occasionally hear the claim made for modern disks with advanced servo positioning systems. I don’t know of any demonstration for such a drive and the possibility may well be urban legend mutated from demos with floppy drives.

A very big chunk of the reason you cannot make sense of residual magnetic data easily these days is first of all that the orientation of recording has changed, and secondly that it is heavily encoded with a large amount of inter-symbol dependency. That is, an individual bit is effectively encoded across five or more bit spaces on the actual platter, giving multiple intensity levels, not just a 1 or 0.

Which, of course, makes data recovery by DSP easier, which, of course, is the intended purpose!

Clive Robinson December 3, 2009 4:24 AM

@ Petter,

"The Great Wiping Controversy" by Wright, Kleiman, and Sundhar.

The conclusions they come to have been called into question because of some underlying assumptions.

Thus I would be cautious about making the statement,

"It is not possible to recover overwritten data."

As a famous Jewish comedian once observed, "The great Jewish problem is that one of them comes along and rocks the Gentile world by saying 'it ain't necessarily so'"…

Information Science has the same “great problem”, so you could say it was a universal problem 8)

Roger December 3, 2009 4:54 AM

@Roy:

After overwriting with zeros, the probability of recovering a single bit is about .57,

I’m pretty sceptical of these sorts of claims. Firstly, as I mentioned above the recording technologies are changing all the time, so any given result remains valid for very little time. Secondly, all the ones I have seen do not really seem to be trying all that hard. In fact, they seem to be trying to prove it impossible and then only begrudgingly admitting that they did, in fact, get some data.

However, let’s take that .57 figure as gospel for the sake of argument.

so the odds of recovering the entire byte are about 90 to 1 against.

Umm, no. That is true only if bytes are completely random. Specifically, it is true if every bit has an equal a priori probability of either value, and each bit’s value is independent of all the others. Pretty well the only type of data for which that is true is encrypted data. For many types of real world data, we can do much, much better than that even in the face of a recovery rate that is so extremely close to random chance.

To take one simple illustrative example, suppose that we wish to prove that a certain known file either was or was not present on the disk before the overwrite. We can do this quite easily by sliding the file around possible starting positions and calculating an index of coincidence. (In this context, basically xor-ing them together and taking the Hamming weight, both of which are very fast operations.) When the IoC greatly exceeds the bounds of chance, you have found your text, even through that heavy error mask. And you don’t even need a particularly long text before the chance of a false positive is utterly negligible; even if we have no file system left at all and so have billions of starting positions to try, a couple of hundred known bytes is enough. That means that for many file types you can identify the location of all the file headers before you even start worrying about the contents.
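
A toy sketch of that search in pure Python (sizes chosen only to keep it quick): a known fragment is planted in a bit stream, every bit is then "recovered" correctly with probability 0.57, and the offset where the bit-agreement rate jumps well above the 50% chance level gives the fragment away.

    import random

    random.seed(1)

    FRAG_BITS = 3200            # a few hundred known bytes
    DISK_BITS = 40_000          # toy "disk image"
    OFFSET = 20_000             # where the known fragment really sat

    fragment = random.getrandbits(FRAG_BITS)
    frag_mask = (1 << FRAG_BITS) - 1

    # Build the original contents with the fragment planted at OFFSET.
    disk = random.getrandbits(DISK_BITS)
    disk = (disk & ~(frag_mask << OFFSET)) | (fragment << OFFSET)

    # Model a noisy read-back in which each bit is recovered correctly with p = 0.57.
    flips = sum(1 << i for i in range(DISK_BITS) if random.random() >= 0.57)
    recovered = disk ^ flips

    def agreement(pos):
        """Fraction of bits matching the known fragment at this offset
        (1 minus the normalised Hamming weight of the XOR)."""
        window = (recovered >> pos) & frag_mask
        return 1 - bin(window ^ fragment).count("1") / FRAG_BITS

    best = max(range(DISK_BITS - FRAG_BITS), key=agreement)
    print(best, round(agreement(best), 3))
    # Prints an offset at (or very near) 20000 with roughly 0.57 agreement,
    # while every other offset hovers around the 0.50 chance level.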

At any rate, with a 57% recovery rate you most certainly can’t assume the data is effectively gone, rather you will need to make a case-by case assessment of your risk model. For example if the secret police suspected you of having the subversive manifesto on your hard drive and just want to be sure, and you did a single overwrite, then with 57% recovery rate you’re a dead man.

kevinm December 3, 2009 7:22 AM

Multiple overwriting to erase a disk is no longer required in the way it was in the days of MFM/RLL. However modern drives have complex firmware and it is possible that sectors can be mapped (logically moved) when errors are detected. There may be some data remaining on such bad sectors/tracks that the firmware will not allow you to access. For the paranoid the answer is multiple steps: (1) overwrite and (2) thermite

Tweezer December 3, 2009 8:40 AM

This is true for many corporate networks. Even if they don’t run VSS, many corporations utilize SAN technology. Many SANs have snapshot features much like VSS to facilitate backups.

Petter December 3, 2009 8:46 AM

@Clive Robinson: OK, I change my statement to “No one has successfully recovered data from an overwritten HD, despite many serious attempts.”

David December 3, 2009 9:46 AM

It seems to me that the disk-wiping controversy is meaningless without a definition of the threat.

If I move a file to an unexpected part of the file system, change the name, and mark it invisible, I’ve got very good security against maybe 95% of computer users. I don’t even have to delete it.

If I want security against a modern investigator, I can overwrite everything with zero bits once, assuming I can get to everything. Forensic experts aren’t going to be stopped if the bits are on the disk in recognizable form. It’s this level that’s potentially compromised by automatic backups.

If you want high security against the NSA now or forensic investigators in 10 years, I’d suggest multiple overwrites. It’s not like they’re really expensive.

If I think a 1% chance that the NSA might be able to read the file in 2025 is too much of a danger, I shred the disk, mix it with other, innocuous disk shreds, and melt the whole thing down.

I can think of use cases for all of these (the last being rather extreme, admittedly; something like the record of how Major League Baseball engineered 9/11). Pick your acceptable threat level, and pick your countermeasures.

Clive Robinson December 3, 2009 11:00 AM

@ David,

“It seems to me that the disk-wiping controversy is meaningless without a definition of the threat.”

Hmm, look at it this way: even one bit recovered has the chance (not probability) of halving the brute-force key search space.

More realistically, what is the chance your "closed source" encryption software has not been properly written, and the buffer used to hold your passphrase gets "paged out" to disk by the closed-source OS?

Even though the first part of it may get overwritten, it might not.

The tail end of a human-memorable sentence is likely to be more revealing than the front, due to the nature of the way simple English statements are put together ("……….atOntheMat").

The failure of programmers to clear buffers before they free memory is legendary. Even when there is a clear existing track record of security failures due to this, they still do it, it still gets through code reviews, and your passphrase, on which everything rests, gets put in the slack space etc. on the disk.

It is the physical security equivalent of leaving the alarm and vault keys under the "welcome mat" just inside the bank front door, while the staff cannot be bothered to lock the door on the way home each and every night…

A "definition of the threat" is difficult because it is "too human" in nature, and humans are unpredictable at the best of times…

This is why, when there is obvious partial encryption on the disk, it is worth spending a day or two (that's all it takes with a terabyte HD) to try all the likely plaintext found on the disk as the passphrase….

rug December 3, 2009 1:01 PM

@kevinm For the paranoid the answer is multiple steps: (1) overwrite and (2) thermite.

Serious question: Why is 1 necessary, given 2?

HJohn December 3, 2009 1:09 PM

@rug: ((1) overwrite and (2) thermite):Serious question: Why is 1 necessary, given 2?


If I'm understanding correctly, you overwrite your deleted files (or the unencrypted originals after encrypting them) so they cannot be recovered in software. You thermite the disk to destroy it beyond recognition when you have no further need for the data stored on it.

In short, it seems if you wish to destroy the disk then overwrite is unnecessary with thermite, but if you wish to destroy the data but not the disk then you use overwrite.

I could misunderstand what someone else was talking about.

In my home state, a law was passed requiring certain entities to overwrite all disks 10 times, regardless of information stored, before selling, surplussing, exchanging, disposing, etc. Tremendous overhead (overkill) and cost, but it was the law.

JimFive December 3, 2009 1:14 PM

@ Bruce Clement
Re: “Is this because we are approaching some theoretical physical limit to magnetic data storage, or because our technology isn’t yet good enough to do better.”

My guess (and it is only a guess) is that we are approaching a theoretical limit because the magnetic fields from adjacent tracks are close enough together that there really isn’t a gap anymore. N.B. That doesn’t mean we are necessarily approaching the limit of HD density, however.

@Roger

While I’ve heard of the idea that you can read the “interference” pattern from multiple writes on a HD I don’t think it has been demonstrated on a modern drive.

JimFive

Nick P December 3, 2009 2:23 PM

@ Clive on crypto scheme

Thanks for the tips. I mentioned burn-in in the original post, but I’m not using secret splitting. Let me run my current method by you and see what you think. I simply have two locations to store the key and a variable that says which one is in use. Every so often, a function activates that does the following: move the key to new location; update variable that says which to use; overwrite previous w/ pseudorandom string. I can also do this easier with a pointer but I hate pointers. 😉 The result of this trick is no burn in occurs while there is exactly 1 key in one of two contiguous locations to zeroize in event of an emergency. Your addition and subtraction trick was clever, but I think this is simpler & easy to verify with a RTOS. Again, what u think about my “move it & overwrite previous” trick?

@ On Pundits and Politics issue

While I’d love to top Bruce one day as a security engineer, I doubt that will happen & it’s not a goal I’m working toward. Actually, I prefer less popularity: less in my inbox. 😉 The reason I called him a pundit has to do with the change in his publications. As a cryptographer or security engineer, his writings analyzed something, found the flaws and proposed useful solutions in the same or other papers. Today, he’s mainly focused on identifying problems and talking about them to great length. Solutions aren’t regularly mentioned and those that are end up too abstract to implement. In other words, he’s transitioned from a good engineer to an eye-catching writer, speaker and pundit. There are exceptions (i.e. Skein or TrueCrypt deniability), but this is the general rule.

Note that this is not a stab at Bruce Schneier. After all, I’m on the blog regularly, aren’t I? I was just indicating (in an admittedly unclear way) that anyone wanting countermeasures, secure solutions, etc. should look elsewhere. He does have a lot of critics, though, that claim he bashes decent countermeasures on the grounds that they are imperfect, like two factor authentication. Many people read that one and thought 2-factor was a bad idea. However, it was enormously successful at reducing risk: bad guys had to target small numbers of people in real-time attacks, rather than write once, automatically rob a million or something. So, I sometimes question the utility of some of his posts. Yes, Bruce, we know these things aren’t perfect. Yes, we know there are counter-countermeasures. Tell us something useful. Overall, though, I like his articles, esp. on the elements of psychology in privacy, security & the security mindset. I owe plenty to Bruce for helping create a nice foundation for my own knowledge in the field. I just wish he was still as involved in crypto and security engineering as before. With such an influx of amateurs into the field, it could use his guidance now more than ever.

plonk December 3, 2009 2:42 PM

I for one don’t mind seeing “common sense” and “training issues” being discussed on this blog.

Clive Robinson December 3, 2009 4:58 PM

@ rug,

“Serious question: Why is 1 necessary, given 2?”

Ah, yup, I see what the problem is.

kevinm did explain, but it’s not obvious unless you know what he is talking about.

OK, HDs are electromechanical items and they are to a degree unreliable (which is a given with most things mechanical).

Sometimes HD sectors don’t work properly or become worn out.

Now the $64K question: do you throw the drive out just because of one or two tiny defects, or do you do something to hide the problem?

The answer is the latter. You do not actually write to the HD these days at the sector and cylinder you think you do; the microcontroller on the drive does that. You neither know nor care in most cases where the microcontroller actually writes the data on the physical platter.

Thus if you have spare capacity on the drive, a faulty sector address can be swapped with a good sector address.

The problem then is that the faulty sector sits on the HD platter from that point onwards without the PC being able to see it any longer, so you cannot overwrite it.

Further, as the HD microcontroller does it transparently, you are not even aware it has done it.

However the HD manufacturer, and presumably those in the know, can change the code in the HD microcontroller’s flash ROM or access “hidden” low level software for testing etc., and thus can get access to these “orphaned sectors” that the PC can no longer see.

Hence kevinm’s advice of,

For the paranoid the answer is multiple steps: (1) overwrite and (2) thermite.

Step one gets at the sectors the PC can get to.

Step two will hopefully get most (but not all) of the sectors, available or not.

Unless used properly, thermite is actually not as reliable as you might think. Yes it will destroy the disk casing, but some HD platters are actually made of glass…

That means that they will not actually burn, and in theory the inner platters could have sectors that could be read…

The way to use thermite is to have a “fire brick” box with a void at least five times the volume of the drive but with about the same physical dimension ratios.

Put the thermite igniter through a small hole in the bottom of the void, cover it with a layer of thermite about the same thickness as the drive, put the drive in with the electronics side up and then fill the rest of the void with thermite.

Place a loose fire brick over the top of the void and, standing well back (or on the other side of a brick wall), set off the igniter.

Being in the void, enough of the heat from the thermite reaction should be transferred to all parts of the drive to cook it down properly.

If you feel you need to “ginger up” the thermite a bit you can add “flare material” to the standard mixture. In essence flare material is very fine PTFE powder, and you need to add about 5-10% by volume.

To be honest my preferred method is to sledgehammer it a few times first to break it open, then cook it down with a cutting torch; that way you can make sure there are no usable bits left.

A word of caution: unless you want to die a really unpleasant death from lung dysfunction, do not under any circumstances breathe in the fumes from cooking the drive, especially if you are using thermite plus flare material.

It is a most unpleasant death (which is maybe why they use it in “bunker buster” FAE bombs).

Oh and by the way, there are other metal oxides that can be used in the thermite mix apart from iron oxide. They produce different temperature profiles and “burn rates”.

Contrary to what you may have been told, thermite is not an explosive. It does not produce an increase in volume. The drive however does contain parts that will produce an increase in volume by “gassing off”, so unless you fancy a “self brand” as well as lung dysfunction, stand well back…

PackagedBlue December 3, 2009 7:48 PM

For those who have never had the fun of destroying drives, here is all you need.

Hammer, flat head screwdriver, charcoal grill, fan helps. Open the drive by breaking the screw heads off. A large crowbar really helps to pry the cover off. Take cover off, put in fire. A fan can speed up the process, but not needed. Fumes are bad, as clive mentions. Blue fire, from the board! A normal propane torch will not melt the platters.

I used to go to the trouble of removing the platter from the spindle, but it is a pain, and not needed.

Perhaps one does not even need to remove the cover, but I have not tried it, as I can get the cover off in under a minute with the crowbar and screwdriver method.

Best of burning to all of you.

Nick P December 3, 2009 10:54 PM

@ Clive

Nice description of thermiting HD’s. I’ve, on occasion, used this technique when no disk encryption was used. Scrape and magnetize a bit, then thermite. The firebrick and PTFE were new to me. I mean, is anything really recoverable after thermite hits the platter or chips? Do we need to increase damage? (maybe i’m less paranoid than some suggest…) I also fried the chips, just in case some stored anything. I figured there were some deadly vapors, so we wore cheap masks from Lowes and stood a bit away. 😉

I think the solution to these erasable issues is red-black separation applied to memory and raw storage. What I mean is that everything in RAM or on the disk should be ciphertext. Onboard controllers/ASIC’s sitting in front of each medium should transparently encrypt/decrypt. Zeroizing all the data in RAM and on storage then means killing some keys. The Air Force is investigating a strategy to use the same RAM encryption to isolate Xen VM’s from one another, and NSA’s IME does it for harddrives, but with other shit they wanted that might have extra risks.

I say we make an embedded computer that acts as an IME and has a zeroize button. There should be client software, a driver or NFS (GOD NO!!), to act as a file system for the user. Information will be transparently sent to the coprocessor, which encrypts it before storing it on disk. Since the disk is connected to it and a MILS architecture is used, one can ensure red-black separation with high confidence. Also, we will either clear the client’s RAM after shutdown (i.e. Incognito Linux) or on a zeroize command, or put an FPGA w/ crypto between processor and RAM (like the Air Force). Throw the coprocessor and drive in a safe w/ tamper resistance and a zeroize button on the outside and you probably have enough time to erase all buffers and RAM as well.

Another thought was an additional killswitch, tied to the operator’s body. If someone breaks in, they just yank their arm real quick, a signal goes from 1 to 0, and keys are erased. Would be hard to beat if the wire and resistance are carefully determined. The zeroize button and tamper resistance are still there too. The idea is to make an attack w/ successful data recovery extremely expensive and risky, kind of like IBM’s coprocessor does. The difference is that one can protect many computers rather than 1 coprocessor chip. A whole chain of zeroize commands can be initiated by one button press or arm/body movement or anything else. It increases the risk of data loss, but deniable computing is inherently very risky. Only professionals or very careful individuals should even attempt it.

Peter A. December 4, 2009 5:49 AM

Gosh, why thermite?

An anecdote first:

A server crashed over a weekend. It wasn’t a particularly important one so none of us admins bothered to drive there. On Monday a co-worker phoned a local person to go and see, push the reset button, cycle power or whatever. The person reported that the server ‘squeaks badly’. Well, someone had to drive there anyway. On arrival the co-worker heard what he described as a metal grinding noise coming out of the hard drive… Ouch!

He brought the drive to the office and we took it apart. What we saw was quite a lot of fine metal dust all over the inside and loose magnetic heads, with the tiny wires they normally hang on scattered in corners. The arms of the head set assembly were touching the platters’ upper surfaces. Either the spindle or the heads’ servo axle must have been skewed somehow – only God knows how, as nobody confessed to dropping the poor server from the table or any similar thing.

The lower surfaces of the platters looked nice and shiny but the upper ones were shaved off to the bare metal. The heads’ landing zone had a nice 2-mm wide, ~0.2 mm deep ditch ground into the platters.

After visual inspection we powered the poor drive up (what else would you expect bored admins to do?). It spun up nicely and then apparently tried to read something off the platters, as it moved the servo back and forth many times, then gave up and moved it back to the landing zone. The noise was indeed a spectacular one – at least for what you’d expect from a hard drive…

Ok, coming to a conclusion now. I’ll bet no NSA is going to read anything from that disk’s upper sides (the lower ones seemed rather intact). So why bother with thermite, toxic fumes etc.? Open the drive, remove the platters and grind off 0.1 mm of all surfaces. If you’re going to do that on a regular basis, a simple tool could be constructed: a motor, some way to mount the platters on the axle, and a set of small grinding stones mounted on short arms properly spaced. Mount the platters, spin up, put the stone set between the platters, press against the upper surfaces, move along the radius, press against the lower surfaces, move along the radius, done. You may want to fry the PCB as a bonus but I really doubt any on-disk data gets stored permanently in the drive’s controller. The builtin cache is most likely DRAM or maybe even SRAM.

Clive Robinson December 4, 2009 6:04 AM

@ Nick P, PackagedBlue,

All three of us must sound a little paranoid to other readers 8)

@ ALL,

However if you are still with me I will try to explain the reasoning, and lastly and importantly why it has legal effect on people these days (insider trading, Sarbanes-Oxley, etc.), which means it cannot be “paranoid” from that perspective.

Firstly, the old sayings/saws of,

“What you don’t know can’t hurt you”

And

“Ignorance is bliss”

don’t apply to the modern world of “accountability in law” and thus to some types of information.

With regard to my personal view of thermite use etc., you have to accept there is in most cases a time element to account for, due to the fundamental laws of nature (thermal energy propagates at a speed defined by the physical properties of the transmission medium).

Based on having had a look at some of the physics involved, if time is not an issue then simply taking the drive apart and heating the platters up to a moderately high temp for an hour or so should be sufficient for the magnetic domains to get “mushed” beyond readability (important note: Flash ROM is entirely different).

However the military and others use thermite to destroy info equipment for reasons of speed, reliability and very importantly integrity (explosives don’t destroy the info they spread it out over a larger area…).

Obviously speed is of importance if a diplomatic or other “sensitive information handling outpost” is about to be overwhelmed by the enemy.

Normally the military have large biscuit-tin sized kits of destruction materials that take about five minutes to prepare and use (the reason for this is actually based on “safety” and “availability” in a physically arduous “boots-n-mud” environment).

However for some this five minute time is considered far too long, and a couple of seconds the same. Thus, as I and others have seen, drives and in-line crypto kit actually get built into special safe-like enclosures lined with thermal insulation (fire bricks) with the thermite built in around the equipment. And a slightly recessed emergency “pull down” on the outside to set it off, as well as electronic and mechanical tamper detection triggers. Oh, and a handy little plug on them to plug in a “remote destruction” signal (like a dead man’s handle). Once triggered, the inside of these safes heats up to several thousand degrees (F) and will stay there for some considerable time (hours if not days), thus the physical destruction of the info device, and thus the information it holds, is “reasonably” well assured.

This by the way is one of the reasons for the fire bricks: they hold the thermal energy there long enough to ensure the job is done.

Again my viewpoint is that they are trying to solve the wrong problem the traditional “but reliable” way.

This “burn to ash” mentality arises from the old assumption that information is tangible. That is, the old mechanical cipher machines and paper code books had a physical presence and could be stored in tamper evident safes.

Under this (incorrect) view even the most argumentative of people (think legal sharks these days) will have difficulty arguing you were not taking reasonable steps towards “ensuring confidentiality” when presented with a piece of “furnace slag”.

As an aside, remember that in law probability has little relevance to the outcome: “argument is all”. Reasonableness as a realistic measure in law went out with the Clapham horse-drawn omnibus. These days liability means “class action” and stacks of cold hard cash. Thus you might regard liability lawyers with the same viewpoint as “gold rush” miners from earlier times.

In a large communications network using point to point encryption (as opposed to end to end), the intangible nature of information becomes apparent.

That is, the loss or break of a single key (or pass phrase) may unlock all the information that has been sent through the network node to an enemy that has had (presumed) access to the cipher text. Hence the reason for frequent key changes in such networks.

Thus you have a distinct difference between information storage and transmission, in that with transmission it is easy to say exactly what information has been put at risk and from what point in time.

Thus one issue with information storage, if not done properly, is that effectively all the information ever put on it may be locked under a single key.

Which is not at all good, as I’ll explain.

You can see there is a very real difference between storage and transmission of information unless specific extra steps are taken (and again this is something those producing HD encryption software appear to be unaware of or are ignoring).

You thus might well ask “why does this difference exist?”

Which to answer brings us around to the important question of “what is information?”

Information itself is an “intangible good”; it only becomes tangible (i.e. becomes subject to the laws of physics) when it is stored or in transit.

This intangibility means that there is no “deterrent cost” to duplication of information that is “available” (think how many photos of the Twin Towers there are in existence).

Thus you need to “lock information up when stored or in transit”; this involves some method of making it unusable in a reversible way (encryption), and also reducing the costs of unlocking it for use (we have not quite got to the point where we can process encrypted data as though it was unencrypted, but people are working on it).

The issue then is what is the cost of locking / unlocking and the trade offs. There are several metrics you need to consider,

For instance the OTP has very low CPU cycle cost (XOR) metric but a very high key cost metric. AES on the other hand has high CPU cycle costs but very low key costs (128bit).
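
As a trivial illustration of the CPU-cost end of that trade-off, the entire “work” of an OTP is one XOR per byte (a sketch, nothing more):

    #include <stddef.h>
    #include <stdint.h>

    /* One-time pad: encrypting and decrypting are the same XOR loop. The pad
     * must be truly random, as long as the data, used once, then destroyed. */
    void otp_xor(uint8_t *data, const uint8_t *pad, size_t len)
    {
        for (size_t i = 0; i < len; i++)
            data[i] ^= pad[i];      /* 1 key bit touches exactly 1 data bit */
    }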

Which brings you around to the question of “force multipliers”, that is, how much leverage can you get from a single bit of information.

In the OTP, when properly used, a single key bit will only ever give you 1 bit of information and nothing else, and the opposite is true: a single bit of known information can only give a single bit of key.

In the case of a 128-bit AES key, each recovered key bit halves the brute force work required to be done by the enemy to recover all of the data encrypted under it. But also a single bit of known information can be used to help verify if a brute-forced key is valid or not.

That is, in the case of AES each key bit that survives the destruction process has a “chance” of reducing the work the enemy has to do by half.

This force multiplier effect is one reason OTPs will always have a use no matter what other people might argue.

Now the thing is that a chance has a probability of happening. And in the physical world, due to process costs, we have probability “norms” and “extremes” which often “fall on the bell curve”.

In the information security world the probability is actually irrelevant it is the fact there is a chance or not, no matter how small the probability of that chance being in one state (secure) or not (insecure).

That is you identify all the chance “states” not the probability of an individual chance being in a given state (in a prize draw a ticket ends in one of two states “it’s won” or “it’s lost” the probability of the state is dependent on the number of tickets sold).

Thus the focus switches to what the effects are of the information falling into enemy hands to decide the cost of the loss of information.

The problem is it is even less of a “known problem” which is why we have such sayings as,

“For a ha’pen’th of tar the ship was lost”.

So apparent paranoia may or may not be justified depending on the number and type of chances (not the probability of a given chance state) of an event occurring and its “potential cost” if it does happen.

Therefore in information security the prevailing view is,

0% chance = 0 potential cost.

And… due to unknown force multiplication and the unknown ability of the enemy, the inverse of this is effectively assumed to be,

Not 0% chance = Infinite potential cost.

Which is why the “thermite safe” and “dead man’s handle” option is sometimes used.

Although arguably there is a chance it will fail so you could argue that the whole argument is nonsense!

But remember we are dealing with the “Human logic” of “Human perspective” with regards to the “Human Notion” of “potential cost” here, which is sometimes called “peace of mind”.

Which is best summed up by the supplier of solutions saying “the customer is right/King”, or the more cynical “The customer gets what they pay for” and “a king can pay top dollar” 😉

Now “once upon a time” only the “state” had need of such measures. But times have changed: laws have been passed with regards to “insider trading” and “auditing of responsibility” to supposedly prevent certain behaviour patterns that some regard as illegal for whatever reason.

Where there is a law you get into chances, not probability. The reason for this is “argument”. Frequently the argument is about “unlimited liability”, which translated means “you lose, I get everything you’ve got”…

Thus for a large organisation this puts them in the crosshairs of an unknown number of lawyers, where they can lose everything on the chance that they are found to be in the “guilty state”. It is like being the pig in the farm prize draw and only having the ability to buy a limited number of tickets to protect yourself from being slaughtered for the price of your hide.

Thus the steps you need to take would in more reasonable circumstances appear “paranoid” but they are not when viewed from the perspective of “unlimited legal liability”…

Clive Robinson December 4, 2009 9:08 AM

Nick P,

Sorry for not responding earlier, had to caution somebody about a Troll…

With regards,

“I think the solution to these erasable issues is red-black separation applied to memory and raw storage. What I mean is that everything in RAM or on the disk should be ciphertext. Onboard controllers/ASIC’s sitting in front of each medium should transparently encrypt/decrypt.”

It is the way to go, other than that the “crypto” needs to be inside the CPU chip.

You see this on some microcontrollers where the internal flash ROM is encoded/ciphered to stop somebody flipping the lid off and using a Scanning Electron Microscope (SEM) or IR microscope to read out the program code.

However it has security issues (doesn’t always).

The CPU is actively talking to memory; even if the memory is encrypted you can still get a very good idea as to what is going on simply by observing the memory addresses in use on a vector scope.

It becomes clear which areas of memory are code and which are data, and further the general types of data, by the number of consecutive bytes read or the number of times a given location is read. And whether the data is “constant” or variable, whether it is “global” or local, and all sorts of other things.

When you know where the code is you can have a good idea which is OS and which is program; also you can work out what OS calls are being used…

Thus without further measures, other than just data encryption, the program and some of its data can be enumerated 8(

Even if you also encrypt the RAM address bus you will still get an idea of what is happening simply by observing which memory is used read-only and which is used as read and write. And in the case of “stack” or “heap” memory it becomes clear as well with time and usage.

Further you can cross-correlate events with IO actions, so you can see the code load etc.

One of the things with “in line” encryption is that it is usually transparent at some point, and this has a side effect of opening up exploitable information channels to an observer.

For instance consider point to point encryption -v- end to end encryption.

Point to point encryption in a network effectively hides routing information, but without further measures simple traffic flow analysis will give you the likely start and end point of a communication in a network.

Even if you do take steps to use random routing in communication, you will see just by how much data enters or leaves a node where traffic is likely to be heading.

The solution adopted by the military for years has been “constant traffic flow”; that is, every node point sends and receives traffic at either the channel capacity or a fixed fraction thereof.

To an outsider they have no indication as to which bytes in the comms stream are “data” or “fill” thus easy traffic flow analysis is prevented.

Now you have the issue I keep pointing out but which still gets ignored 😉

Efficiency is on the other side of the slide to Security.

That is, the more efficient an object is the more side channels it has.

So you “clock the outputs and the inputs” at a fixed rate irrespective of what is going on. This makes a timing side channel difficult to implement and of very, very small bandwidth.
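
As a crude sketch in C of the “constant traffic flow / clock the outputs” idea (the frame size, tick rate, stdout “line” and empty queue are purely illustrative, not from any real system):

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <time.h>

    #define FRAME_LEN 64                        /* every frame is this size       */

    static size_t next_data(uint8_t *buf, size_t max)
    {
        (void)buf; (void)max;                   /* stand-in for a real outbound   */
        return 0;                               /* queue: here it never has data  */
    }

    int main(void)
    {
        uint8_t frame[FRAME_LEN];
        struct timespec tick = { 0, 10000000 }; /* one frame every 10 ms, always  */

        for (int i = 0; i < 100; i++) {         /* 100 ticks for the demo         */
            memset(frame, 0xAA, sizeof frame);  /* fill bytes                     */
            next_data(frame, sizeof frame);     /* copies 0..FRAME_LEN data bytes */
            fwrite(frame, 1, sizeof frame, stdout); /* size never varies          */
            nanosleep(&tick, NULL);             /* clock the output, not the data */
        }
        return 0;
    }

To an observer every tick looks the same, whether the frame carried data or only fill.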

So much for communications, how about storage?

Well… it has its own issues at various levels. Observing the address lines on the CPU, watching the head movement and write amps etc. will tell you a lot over time.

Further there is the issue of time…

In comms, data is in transit and time has little meaning overall (this is an overgeneralisation). For storage however it has significant issues.

One of which is that in-line encryption has no notion of time other than it passing. Data needs to be encrypted under different keys; in comms time is not an issue, the data gets a new key each time etc. But on a hard drive the data is stored for an unknown period of time. Unless an inline encoder works at a higher level it has no way to implement time-sensitive encryption, which effectively means all of the data has to be encrypted under the same key…

This makes it sensitive to other attacks that do not affect comms, only storage.

Thus you need not just point to point (inline) encryption but end to end (time/file) encryption as well.

Let’s say each step adds a 9 or two to the end of the 99.9… but it’s not 100%.

Jonadab the Unsightly One December 4, 2009 10:28 AM

This is, on the whole, a good thing. Do you have any idea how many trillions of dollars worth of data that should have been retained have been irretrievably lost over the decades because the user, against all reason, overwrote and/or deleted it on purpose?

Yes, if you’re working with sensitive data you have to be careful. Duh. Hopefully you knew that already.

But the amount of information in the world that it would really be better to lose irretrievably rather than let it fall into the wrong hands is, frankly, relatively small.

I don’t see how shadow copies (or automatic versioning, like VMS has had for decades) are any different in this regard from automatic overnight backups or any other technology designed to protect against unfortunate data loss.

BTW, the assertion that there’s “no way” to actually delete the data is absurd. The operating system does not provide a convenient mechanism for doing it, for very good reasons outlined above, namely, that if it were easy to do people would do it when it’s neither necessary nor a good idea. But it certainly is possible to wipe the shadow-copied data.

At the extreme, you can always throw in a Knoppix CD and dd if=/dev/urandom of=/dev/whatever a couple of dozen times, then levigate the disk to the consistency of talcum powder, subject the remains to a potent electromagnetic field, and then disperse them over several acres of ocean. Problem solved.

And that’s the extreme to which no normal situation would ever take you. It’s also possible to be significantly more selective than that in exactly what you wipe, leaving the rest intact. If you have any idea what you’re doing, you can just wipe all the shadow copies of everything and leave the current versions intact. (And this ought to go without saying, but people who don’t know what they’re doing at that level shouldn’t be working with dangerously sensitive data in the first place.) If you actually understand how VSC stores its stuff, you can just wipe the shadow copies of the sensitive data in particular, or even just certain versions.
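
For what it’s worth, on recent Windows versions the shadow copies themselves can be removed from an elevated command prompt with something like:

    vssadmin delete shadows /for=C: /all

followed by a wipe of the free space, since the blocks the shadow copies occupied then become ordinary unallocated disk space.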

HJohn December 4, 2009 10:42 AM

@Jonadab the Unsightly One at December 4, 2009 10:28 AM

Your points are well taken and valid. But I think the underlying problem is that when a user has deleted and overwritten a file, or encrypted a file and overwritten the unencrypted residual data, it is not the operating system’s place to keep a copy, or an unencrypted copy, without their knowledge or consent. Particularly since, in order to do what they are trying to do, they may have to disable or cripple their system restoration capabilities.

Murphy’s Law definitely applies here. If you get rid of VSC or VSC points so you can delete something, you’ll then need a point that no longer exists. But this is a topic for another day.

People shred hardcopy files on purpose all the time, even though it sometimes turns out to be a mistake afterwards. That wouldn’t justify the shredder making copies of them just in case.

Nick P December 5, 2009 2:18 AM

@ Jonadab

In addition to HJohn’s comments, you might want to skim the others. They have a few important points you miss, like how many tools ignore bad sectors on a disk which still contain information. Then there are theoretical and semi-practical attacks. Then there’s whatever’s left lying in buffers, swap files, and hardware memories. Lots of stuff to consider for those whose information is worth the cost to attack. Most people need not worry much, but some could lose big cash, be imprisoned or die if info got out. Why take the chance if easy solutions abound (see my original posts)?

@ Clive Robinson

I appreciate your reply. You gave great detail on the many issues associated with securing interactions of components in even a local system, while maintaining confidentiality and integrity. However, I think I may have not been clear on the purpose of my memory and disk encryption or the threat model. So, here it is as precisely as possible so I can get your most accurate review.

The purpose of my setup is to defeat forensic attacks after erasure. The RAM and disk are only encrypted so that destroying the key leaves only ciphertext. If someone has physical access to system, they can use methods you describe. The goal of my system is that, once the zeroize function is activated, the forensic guys can search disks and burnt-in RAM all they want and find only useless ciphertext. In effect, my scheme makes it where only the key for RAM or HD is sensitive, it’s only in one place, and is easily (read: very reliably) destroyed. The design is based on Kerckhoffs’s principle. The assurance of this scheme should be as strong as the crypto algorithm and implementation. Secondary goal, with less assurance, is for tamper-resistance to activate zeroize process if operator isn’t there to do it. Again, that part is low assurance. The main goal is preventing forensic analysis by my zeroize function and supporting the assurance by using a dedicated controller/card for crypto, with small RTOS to ensure predictable memory and wiping characteristics.

So, what I’m asking your review on is my high assurance goal: will seamless encryption of data, as in RAM cryptocontrollers or IME’s for HD’s, prevent recovery of all RAM and HD data if the key is zeroized from those controllers’ memory and never leaves that memory during operation? That’s the only goal. Not preventing a probing adversary or anything. An example usage is if the security team alerts that a penetration is occurring or a severe compromise happened, then the operator presses the zeroize button to erase keys and never worries about ciphertext recovery. Analysis and cracking attempts happen at least 1 min. (worst case) after keys are erased and the system powered off, possibly thermite like you mentioned on controllers (storage has only ciphertext, so not sensitive). That’s the threat model and the countermeasure. I figure this would be cheap, esp. with truecrypt and an embedded PCI coprocessor, and could rapidly make data unrecoverable for later forensic analysis.

With this model and countermeasure, my security claim is that if crypto works, they have to recover most or all of the key that was overwritten randomly several times in static RAM. I think it’s very infeasible if the operator hits zeroize before the cracking attempt, and 99.999% impossible if it was also in a safe (cracking time fades RAM). This tactic is what I use for highest security data protection, with a few secret extras. This is the core approach though. So, now that you know the precise purpose (preventing post-zeroize recovery of RAM or HD), what do you think of the scheme? Do you agree that it deserves high confidence or am I missing something?

Fire Snake December 5, 2009 12:56 PM

Sorry, but all you are saying is that “if you make a copy of a file and then delete only the original file, the copy is still there (no matter how many times you overwrite the original file)”.

Why “bash” the Volume Shadow Copy service for this? I suppose it works the same on every OS. If you make a copy with robocopy, it works the same. Maybe you wanted to say “If you give the backup-files privilege to a user, she can read any file; ACLs are not applied”. Bashing a backup tool for making a backup is a bit silly.

Clive Robinson December 5, 2009 4:50 PM

@ Fire Snake,

“Sorry, but all you are saying is… ”

Err no, that is the process that is happening.

It is the why of it, and the consequences.

The “why” is it happens by default because MS decided it was a good idea.

The “consequences” are that a user may not know it is happening, or how, as it is both “new” and “non transparent” in its operation.

Giving rise to the issue that a user mistakenly believes that as they have removed a file from the visible file system (which they have), it is removed from the PC (which it is not; it’s in a hidden file system).

The problem is that the hidden file system is by design not easily accessible; further, disabling this feature is not as easy as it could be.

The unfortunate result being that confidential information a reasonably technically savvy user thought had been removed from their PC has in fact not been removed, nor can they easily remove it.

Whilst being nice for many many users, it’s problematical at best for those using confidential or above information.

Clive Robinson December 5, 2009 6:21 PM

@ Nick P,

Hmm this might take some time but first thoughts,

You say,

“The purpose of my setup is to defeat forensic attacks after erasure.”

Which is not what the system actually does.

You have realised that “time” and “persistence” are two major issues and you are aiming specifically at these by,

“The RAM and disk are only encrypted so that destroying the key leaves only ciphertext.”

That is, the data is not destroyed, it is merely locked up in an “encryption safe”.

Thus the security of this is dependent on the enemy’s ability to find the key to the safe.

As you say,

“If someone has physical access to system, they can use methods you describe. The goal of my system is that, once the zeroize function is activated, the forensic guys can search disks and burnt-in RAM all they want and find only useless ciphertext.”

Now you are arguing “security” after an event, on the assumption the key is unknown or cannot be recovered.

The key might actually be known due to EmSec and/or “cache timing attacks” or other side channel attacks.

From what you say,

“In effect, my scheme makes it where only the key for RAM or HD is sensitive, it’s only in one place, and is easily (read: very reliably) destroyed.”

An attacker who is aware (for whatever reason) that your scheme is in use must target the key.

Which you appear to be aware of,

“The design is based on Kerckhoffs’s principle. The assurance of this scheme should be as strong as the crypto algorithm and implementation.”

So the weaknesses at this point are,

1, Data is not destroyed, only encrypted.
2, The encryption key is stored within the system.
3, An attacker is assumed to know that they will have to,

3a, get the key by observation.
3b, get the key by applying pressure on the operators.
3c, recover the key from the hardware.
3d, use cryptanalytic techniques to recover the key.
3e, prevent the key being zeroized.

Any problems with what I have said so far?

You appear to be aware of some of these issues when you say,

“Secondary goal, with less assurance, is for tamper-resistance to activate zeroize process if operator isn’t there to do it.”

Which may end up being the “low hanging fruit” of the final system.

So on to the details in the core of the system.

“… using a dedicated controller/card for crypto, with small RTOS to ensure predictable memory and wiping characteristics.”

Now the thought that occurs is why an RTOS, and what are its strengths / weaknesses / connectivity / etc.

“So, what I’m asking your review on is my high assurance goal: will seamless encryption of data, as in RAM cryptocontrollers or IME’s for HD’s, prevent recovery of all RAM and HD data if the key is zeroized”

The simple answer is no, because the data is still there; it is just encrypted.

The security under your assumptions rests solely on the key being unrecoverable.

Extending the “zeroize” into all the RAM should be the next option. And also disguise where it has got to by writing random data, not zeros.

Then there is the issue of the IO and other areas of mutable memory that is not the HD.

That is under a fault condition it is possible that at some point some or all the key or other critical plain text gets written to a place it can be recovered from.

Then there is the issue of the hard disk.

Inline encryption of the data needs to be thought about carefully.

At its simplest it is “code book” encryption, which on a large hard drive is going to leave it subject to various well known attacks such as “chosen plaintext” or “known plaintext” attacks.

For instance a large part of MS Word files is “known plaintext” and under a simple “codebook” system the identical headers will tell you how many word documents are on the hard drive. Likewise with .exe files and many others. Also a large chunk of the OS is nearly always in the same place on a hard drive.

Thus your encryption system needs to take one master key and convert it to many sub keys and IVs, and chain encrypt each fixed block (i.e. a sector or sectors that go to make a logical block).

Further it needs to do “Russian Coupling” or equivalent on major features of the HD structure.

This has some quite serious implications, because it is no longer simply acting as an “inline” encryptor; it is going to have to cache blocks of data as well.
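
One common shape for the per-block key/IV derivation is ESSIV-style: the IV for sector n is the sector number encrypted under a key derived from the master key, so identical plaintext in different sectors no longer produces identical ciphertext. A toy sketch only (the block_encrypt() here is an explicitly fake stand-in for a real cipher such as AES):

    #include <stdint.h>
    #include <string.h>

    #define BLOCK_LEN 16

    /* NOT a real cipher: a toy stand-in so the sketch compiles. In practice
     * this would be AES (or similar) keyed with a hash of the master key.   */
    static void block_encrypt(const uint8_t key[32],
                              const uint8_t in[BLOCK_LEN], uint8_t out[BLOCK_LEN])
    {
        for (int i = 0; i < BLOCK_LEN; i++)
            out[i] = (uint8_t)(in[i] ^ key[i] ^ key[BLOCK_LEN + i] ^ (uint8_t)(i * 37));
    }

    /* IV for sector n = E_K'(n), K' derived from the master key */
    void sector_iv(const uint8_t iv_key[32], uint64_t sector, uint8_t iv[BLOCK_LEN])
    {
        uint8_t block[BLOCK_LEN] = { 0 };
        memcpy(block, &sector, sizeof sector);   /* sector number as the “plaintext” */
        block_encrypt(iv_key, block, iv);
    }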

“With this model and countermeasure, my security claim is that if crypto works, they have to recover most or all of the key that was overwritten randomly several times in static RAM.”

That is true, but as I said it has to be the right use of crypto. Dorothy Denning wrote a book in the 80’s on securing DBs and some of the pitfalls; I would definitely have a read of it.

“Do you agree that it deserves high confidence or am I missing something?”

We always miss something 8(

Because we are all I hope human 😉

My main concern is the HD; the lower the level it is encrypted at, the more vulnerable it is to attacks (which is why I don’t assume inline encryption is enough). Thus it needs to encrypt multiple times, at each level of the OS stack; the more layers it works at, the more likely it is to be secure.

HD encryption is a really hard problem and few people get it right without considerable thought and experience.

Ideally it needs to be built into the OS filesystem “proper”, not bolted on, which most solutions are.

Hope that helps a little?

Nick P December 5, 2009 11:28 PM

Yes, the concept is essentially an encryption safe but, if academic papers and crypto proofs are to be believed, losing the key to a well-implemented safe equals secure erasure. This assumes the key is truly unrecoverable. So, I’ll focus on your points covering this.

EmSec is definitely an issue. I claim no protection against that. For those who need it, I just throw it in an EmSec INFOSEC container safe with a filtered connector and powerline. Best I can do, there. As for side channels, they would have to target the power line or serial port, which is the only comm link. I use serial ports because they have no DMA, have drivers on every OS, and limited bandwidth (some covert channel suppression). The crypto controller is an event-driven, reactive, message-based system. A data packet is sent/received and processing is transparent. Some parts, due to caching, are unpredictable. I don’t see any obvious timing issues that would leak key material or plaintext, but there could be some. Worth looking at.

I like your numerical breakdown of threats. Let’s look at them. Strong isolation of the OS, lack of DMA and an easily verified comm driver should prevent observation of the key. It never leaves memory and is in its own process (a session encryptor of sorts). Point 3b is safe: operators don’t know the key, half was generated onboard by a TRNG and erased by zeroize, so they can’t help. Point 3c is hardware recovery: it has burn-in prevention and the entire system is flushed with random data, but getting all the buffers is tricky and time-consuming… risk here. 3d is cryptanalysis: timing attacks, chosen plaintext attacks and direct analysis of ciphertext. The second two are defeated by correct cipher design/implementation, which I hope is the case, and the first might be feasible as described above. The last was interesting, as I don’t know how they’d prevent the key from being zeroized. Several methods start zeroization: tamper-indication; an external pushbutton (outside the safe); and optionally a switch connected via string to the operator that is pulled if the operator quickly moves a hand. This scheme is designed so that it’s easier to accidentally erase everything than to fail to erase something. All this said, a strong energy attack (i.e. microwaves) or EMP might send short-circuiting through the system. That’s plausible, but not in my threat model.

On your next points, I’ve been saying zeroize but I do mean overwrite randomly. I don’t know where I got that habit. I think it was several clients who kept saying it and I ended up cursed with it. I don’t use zeros because some caches and compression schemes can sabotage that, while random data usually barges through these. So, the keys, RAM, etc. are definitely overwritten with random data, and since the quality of the random data doesn’t really matter (we are just substituting sensitive data with gibberish in memory), I use xorshift-128 because it’s blazing fast. The encryption mode, since it’s truecrypt, is currently XTS w/ 256-bit Twofish or Serpent with SHA-512. I know of no recovery techniques on TrueCrypt volumes at the crypto level. Authorities have every motive to accomplish this and they are still trying to keylog, comb residual info in cold boot attacks, etc. No crypto attacks seems like a safe assumption, but it could be wrong.
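
For reference, the xorshift-128 generator is tiny; a sketch of using it as the overwrite filler (seeding is simplified here, and the volatile pointer is only a hint to the compiler):

    #include <stddef.h>
    #include <stdint.h>

    /* Marsaglia xorshift128: fast pseudorandom filler. Statistical quality is
     * irrelevant here; it only has to replace key material with gibberish. */
    static uint32_t xs_x = 123456789u, xs_y = 362436069u,
                    xs_z = 521288629u, xs_w = 88675123u;

    static uint32_t xorshift128(void)
    {
        uint32_t t = xs_x ^ (xs_x << 11);
        xs_x = xs_y; xs_y = xs_z; xs_z = xs_w;
        xs_w ^= (xs_w >> 19) ^ t ^ (t >> 8);
        return xs_w;
    }

    /* overwrite a buffer in place with pseudorandom bytes */
    void scrub(volatile uint8_t *buf, size_t len)
    {
        for (size_t i = 0; i < len; i++)
            buf[i] = (uint8_t)xorshift128();
    }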

To look at the timing, which is VITAL, here’s the erasure process: a few master keys (HD, RAM, seed pool) overwritten 3 passes; 1 pass attempted on sensitive buffers and caches (like truecrypt); RAM and other general storage overwritten 1 pass; “black” ciphertext storage areas may be overwritten. Master keys and memory, which contains most driver buffers, are most important. I’m focusing most efforts on keys because they protect the most sensitive info and I have to assume the enemy is kicking in the door. As much has to happen in the first few seconds as possible, mainly drive and RAM keys gone. The rest of the Red sensitive data is residual, fragmented, maybe useful, maybe not, so I try to clear it second. If I use thermite, it’s set for a delay that covers worst-case erasure of Red data. If I don’t use thermite, it’s programmed to try to clear everything for as long as it can hold out before the enemy hits it, attempting to overwrite ciphertext as well. Assuming the sensitive Red data is overwritten and no burn-in occurs, it would appear to be unrecoverable against all known attacks within the first few seconds. This design choice was inspired by DoD & NSA COMSEC products, which are all built like that (except that their zeroize means actual zeroes, while I feel random overwrites help disguise any remnants of key data in the event of failure).
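
That ordering, as a sketch (buffer names and sizes are made up; the rand() filler is just a stand-in, per the “quality doesn’t matter” point above):

    #include <stddef.h>
    #include <stdint.h>
    #include <stdlib.h>

    static uint8_t hd_key[32], ram_key[32], seed_pool[64];   /* master keys    */
    static uint8_t plain_buffers[4096];                      /* Red plaintext  */
    static uint8_t bulk_ram[65536];                          /* general storage */

    static void overwrite(volatile uint8_t *p, size_t n, int passes)
    {
        for (int pass = 0; pass < passes; pass++)
            for (size_t i = 0; i < n; i++)
                p[i] = (uint8_t)rand();          /* gibberish, not zeros */
    }

    void emergency_erase(void)
    {
        /* highest value targets first: assume the enemy is kicking in the door */
        overwrite(hd_key,        sizeof hd_key,        3);
        overwrite(ram_key,       sizeof ram_key,       3);
        overwrite(seed_pool,     sizeof seed_pool,     3);
        overwrite(plain_buffers, sizeof plain_buffers, 1);
        overwrite(bulk_ram,      sizeof bulk_ram,      1);
        /* “black” ciphertext areas may follow if time allows */
    }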

I hope that addresses or at least informs your concerns on recoverability after overwrite. So, how can we be sure it’s happening? Well, the first is addressed partly by the RTOS. You asked about that. I try to build verifiable systems (“correct by construction”) rather than guesswork like the linux/MS kernels (“fail first, patch later, repeat ad nauseum”). A separation kernel or at least a tiny, field-proven microkernel RTOS is small, easy to inspect, and usually has very predictable performance into hard-real-time ranges. I can be sure the zeroize (there I go again… ERASE!) button is checked regularly and then deny every other process any CPU time as erasure occurs. So, the RTOS is only used to ensure deterministic behavior under worst case conditions, which non-RTOS’s can’t really do. With separation and true microkernels, one also inherits a tiny TCB and can use secure decomposition to prove system security as a whole. Monolithic is too complex to have any assurance or verifiability. So, all “high assurance” designs of mine use specific hard RTOS’s. And its timing and memory characteristics help ensure the key stays where it should and is erased on time.

The last issue, unless I’m missing something, is the hard drive. Now, I didn’t actually do the truecrypt modifications: I made the system-level design, directed certain interface decisions, etc. Someone else did the code stuff and said only ciphertext touches the filesystem, while the plaintext caching occurs in RAM. So, it’s kind of like a pipeline: [Red app] -> plaintext as IPC message -> [crypto system] -> ciphertext IPC message -> [black app/driver/filesystem/whatever]. Each thing in brackets is in its own process, partition, VM, whatever isolation mechanism is used. The IPC is considered trusted for point-to-point delivery via memory copy (a reason for separation kernels or microkernel RTOS’s). If the implementation really does this, then only the key and that part of the RAM are sensitive (if the arguments in the other paragraphs are correct). Since the zeroize button immediately kills the keys and the truecrypt buffer, black data should be unrecoverable. I guess my only concern with ciphertext on the hard drive, again assuming the dude implemented the pipeline correctly, is whether XTS mode is really safe. That has always worried me, but CBC is largely unacceptable for reasons I’m sure you’re aware of…

So, that about sums up my thoughts on each of your points. Please give me feedback on each. There seem to be a few that I’m almost absolutely confident in. Others seem okay, have empirical support, but may be incorrect. Your single best argument against high confidence is the human factor, esp. considering the designer wasn’t formally trained. I agree it’s a very tough problem. A few years back I thought it would be so simple: just put this here; do this there; erase. Lol @ that wishful thinking. Your suggested trustworthy encrypting filesystem would be nice, but they seem more coupled to the disk at a low level, which you also suggested increases risk. Can I trust manufacturers not to screw that up for me somehow? I’d rather it be full of ciphertext when it screws up, with a separate component doing encryption. That’s just my hypothesis, though. Also, I thank you for the mention of Dorothy Denning’s book. I didn’t know of her before and still haven’t found the book: she’s apparently a prolific author, writing a shitload of crypto papers. Is it “Information Warfare and Security?” No book yet but I’ve already decided to check out the paper on 20 key escrow techniques and her congressional testimonies. So, thanks for the reference.

Clive Robinson December 6, 2009 4:55 AM

@ Nick P,

Ah I’ve learned a bit more about your way of thinking 8)

As I now know what you mean by zeroize I’ll stick with it.

Another point you say,

“operators don’t know the key, half was generated onboard by TRNG and erased by zeroize”

I’m not sure what you mean by this from the system design spec aspect.

Is this to allow for some degree of accidental system tripping or for some other unstated system aspect?

That is,

Do you mean that the operators have a master key that then encodes randomly selected sub keys?

Or do you mean internal keys are split into two parts, one half known to the operator, the other half truly unknown?

Or some other arrangement?

Key management is another area that, like HD encryption, is fraught with subtle gotchas.

Whichever is the case, what is the design reasoning behind it?

One thing you may not be aware of is TRNG’s -v- CS PRNG’s.

There are issues with TRNG’s that people try to hide behind hashing, under a mistaken belief that the “one wayness” of the hash will sprinkle “magic pixie dust” on the output (this idea appears to have gained traction since Intel engineers put it into their chip based TRNG’s in the late 90’s).

It has been known for over a quarter of a century that electronics (thus TRNG’s) can be influenced by external forces/energy at quite low levels.

But even though I have been saying it for that period of time, the academic community have only just “openly published” on it.

Put simply, some of the bods at the Cambridge Labs subjected a 32-bit TRNG to EM radiation and found they could reduce the effective entropy down from 32 bits to 8 bits… that is, from a 1 in 4 billion down to a 1 in 256 series of outputs (I hear an Ouch being said 8)

Thus no matter how you hash each output individually, you effectively end up with only around 256 individual outputs…

And this by the way was with a very very simple attack.

It is basically an EM “fault injection” attack, and there are ways of making it much worse than a simple attack (trust me on this one, I’ve been there and done it on plastic cased hand held devices where monetary value is involved, back in the 1980’s).

And yes, I have said things privately to researchers when fault injection became known in the academic community, and more publicly since DPA popped up in the mid 90’s.

So just about any active attack on things such as smart cards is applicable to your TRNG.

The nice thing about “passive TEMPEST” defence is it works in both directions, so if you use enough of it hopefully the operators will be aware that such an EM attack is in progress when the paint starts to blister off the wall, unless their eyes explode first 😉

Clive Robinson December 6, 2009 7:42 AM

@ Nick P,

I’m not at home at the moment so I can’t give you the Denning reference.

Which reminds me,

“if academic papers and crypto proofs are to be believed, loosing key to well-implemented safe equals secure erasure.”

Yes and no, we know that apart from a few systems such as the OTP all other systems are breakable.

The upper time bound for encryption breaking is “brute force” or more quaintly “British Museum” attacks (don’t ask, it’s a Classics thing). It is easy to say the key space is 2^n bits and each trial takes m time, which gives a nice easy benchmark for “normalisation”.

For some systems, however, it’s not so easy: although the key might be 2048 bits long, not every value in that range will be the product of a couple of primes, let alone end in “01”, etc.

All crypto algs are known to have failings of one form or another, and as you strengthen a design against one aspect another aspect is weakened.

So we know in theory that not only can they be broken, but there are many opposing methods to defend against.

Now there is the problem of information versus the physical world. Entropy is about the only measure of information there is that is not tainted by assumptions of the physical world. A lot of our security assumptions are based on information’s representation in the physical world, such as bits of storage and bits per second of bandwidth etc. (arguably quantum computing is working outside of the physical limitations, which makes thoughtful people cautious).

Entropy is a measure of possibility. As my son knows with his lego bricks, you have more possibility with a pile of loose bricks than you do with an amalgamation of bricks (i.e. you are turning lots of little bricks into far fewer big bricks, and thus the possibilities drop dramatically until you end up with a single model).

The big enemy of encryption is actually the loss of entropy, that is the more structure in your underlying information the easier it is to attack.

For instance, let’s assume you have a TTY link to a program. You can see when the program starts. Let’s assume it has a menu of options with Y/N answers. It does not matter how powerful your encryption system is if it only has two options to encode; thus if each answer is encrypted individually you have only two cryptograms heading down the wire, which is a simple substitution cipher.

Thus if the program, or as Kerckhoffs’s maxim has it “the system”, is known to your enemy, then they don’t need to break the encryption to know what you are doing. Traffic analysis alone will tell them.

One of the problems with both databases and hard drives (which are essentially flat file DB’s) is they are highly structured and are used in a highly structured way.

Examination of a fully encrypted hard drive can be very like “traffic flow” analysis. In fact you will hear people put it another way when they talk about the data being put down on a drive like sedimentary rock being formed.

Unless care is taken, the enemy has the unavoidable structure of a HD visible to them.

Thus it’s not just the information/data you have to properly encrypt but the meta information/data that gives structure at all levels.

So at the physical layer of a hard drive you have,

1, Heads, platters, cylinders and sectors.

At the next level you have,

2, Clusters of sectors etc.

All the way up to the OS file level. At each level there is structure or further metadata added, such as file size, file read/write/modification/user, etc.

The more of this you have the less entropy you have overall thus the weaker the effect of the encryption is.

Simple inline encryptors only work on the raw information. Slightly better ones can work at the cluster level but that’s as far as it goes with inline encryption at the HD hardware interface.

What you need to do is actually pull the entire file system out of the OS leaving only the application level hooks.

Then at each descending level apply the appropriate “mode” of encryption to the structure and metadata at that level.

You need to ask somebody in the know about your chosen system such as TrueCrypt just what it does at each level.

For instance you mentioned the dreaded NTFS. This is a caching hard disk system; it has so much structure you sometimes feel you need a good grasp of “chaos theory” just to be able to understand the possibilities of the arising complexity.

I have personally seen very little written on this subject in recent years, but I can assure you that after a little thought and a few diagrams of FAT12 you will realize why this is an issue.

Which brings me around to how you deal with it.

Which I will post later, just to let both your brain and mine relax and free-associate a little, as it’s Sunday 8)

Clive Robinson December 6, 2009 1:32 PM

@ Nick P,

One of the things about security systems is that the design process is very similar to safety system design, which in turn has a lot in common with old style project management.

Basically it boils down to managing complexity, and oddly the best way to do that is to add more complexity, but in a highly controlled fashion.

A quick explanation / example as to why,

An early automated OTP system suffered badly from timing side channels. That is, you could tell which bits were changed and which were not simply by observing the width of the output pulse. Thus it could only be used in “off line” mode.

It quickly became clear that trying to “control” the pulse width issue was nigh on impossible with highly reliable (GB) Post Office 600 relay technology.

The practical solution was to “double clock” the output. That is, the output from the XOR function with the dodgy pulse width got put into a storage relay. The output of the storage relay was then clocked onto the line using the orthogonal clock.

That is, by increasing the hardware complexity in a controlled way, an uncontrolled aspect of the design was negated.

Hence the TEMPEST maxim about “clock the inputs and clock the outputs”. Thus timing side channels virtually disappear from consideration. This is provided other error conditions are correctly dealt with, which gives another maxim, “on error start from the beginning”; thus the bandwidth available is oh so low (which indirectly gives rise to “on error abort”).

A similar system is used in hardware design (pipelining) but for different reasons (trade delay for clock speed).

So the best way to control complexity is to break the system down into the smallest distinct subsections you can whilst still retaining atomic behavior, and control the interfaces between them.

From what you have said I suspect you already do this the question is though do you do it to a fine enough level and within certain other constraints.

To decide this you analyse the states each subsection can be in.

The first rule is NO RECURSION ever.

The second is NO SHARED STATE ever.

Thirdly, that ALL ACTIONS ARE ATOMIC (or fail safe).

Fourthly, that an individual subsection can only be in one distinct state at any one time.

Fifthly, that (apart from under fault) the switch to the next state should be decided by one input only, and it should not be derived from more than one state back.

Sixth, feedback or feed forward should be avoided at all times.

So for each subsection, 1 state machine, with 1 stored state, back 1 state in time only and only 1 input and one output.

Which is in essence a Turing machine or simple finite state automaton.

Importantly it must have fault detection and each and every state must fail safe.

Wherever possible feedback must be avoided, as this typically magnifies complexity and thus state uncertainty. Feedforward has other issues and likewise should be avoided.

Likewise the interfaces between subsections must be clear and fail safe under fault from either subsection.

Thus the design is in effect one or more “chains” where the state of all links (subsections) is known at all times and all faults stop the system in a fail safe mode.

It is possible to build the simple chain system into a “chain swap” system. That is, a chain can have a selector switch in it that allows groups of links to be used independently of each other.

It sounds like a tough thing to do but surprisingly the result is usually better than all other methods.

The tough bit is either getting rid of feedback or sufficiently isolating it to keep the complexity manageable.

But remember,

1, No recursion
2, All actions are atomic or fail to a safe fault state.
3, No shared state.
4, Interfaces are clocked.
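
As an illustrative sketch only (the class and state names below are invented, not from any real system), one such subsection can be written as a tiny state machine with a single stored state, one input and one output per step, and a latched fail-safe fault state:

```python
# Minimal sketch of one "subsection" following the rules above: one stored
# state, one input per step, one output per step, and every unexpected
# input drives the machine into a latched fail-safe FAULT state.
from enum import Enum, auto


class State(Enum):
    IDLE = auto()
    BUSY = auto()
    FAULT = auto()   # fail safe: once here, the subsection stays here


class Subsection:
    # transition table: (current state, input symbol) -> (next state, output)
    TRANSITIONS = {
        (State.IDLE, "start"): (State.BUSY, "ack"),
        (State.BUSY, "done"):  (State.IDLE, "result"),
    }

    def __init__(self):
        self.state = State.IDLE          # the single stored state

    def step(self, symbol: str) -> str:
        """One atomic step: consume one input, emit one output."""
        if self.state is State.FAULT:
            return "fault"               # refuse to proceed after a fault
        nxt = self.TRANSITIONS.get((self.state, symbol))
        if nxt is None:                  # any unexpected input is a fault
            self.state = State.FAULT
            return "fault"
        self.state, output = nxt
        return output


if __name__ == "__main__":
    s = Subsection()
    print([s.step(x) for x in ["start", "done", "bogus", "start"]])
    # -> ['ack', 'result', 'fault', 'fault']
```

Chaining several of these, with each one’s single output clocked into the next one’s single input, gives the fail-safe “chain” described above.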

just say no to lizard handlers December 6, 2009 7:49 PM

The Security Implications of Windows

There… fixed that for you.

Can I see the source code?
Can you see the source code?

No
No

Then where do you place your trust?

In a corporation?
In invisible, hidden code you cannot see?
Binary updates you cannot know for sure what they do?
Windows Update, which may send your computer, based on its unique ID, whatever it wants?
Need I continue?

Make Groklaw your homepage if you feel Windows has any shred of security and is worthy of any degree of your trust.

PackagedBlue December 6, 2009 8:37 PM

Ouch, to @ just say no to …

I’ll bite on that post, as an OpenBSDer and a no GUI, lynx user for the web.

I hate to defend M$, but productivity is still somewhat tied to M$, especially in corporate worlds, although M$ has hit 7, crap out time, but it will be back someday.

Out of respect for open source and its amazing progress: it works, but you still need to buy/retain/consult massive brains and dedication to get serious security; that is how bad security really is. There is a good reason IBM is a monopoly, and Linux is given a major pass at what it really is. Security is a private business and an exclusive market.

Besides, what does the chameleon look like in the mirror? Depends upon who you are, and how you are looking at it, and how it sees you. That is really what M$ is about in the business world, in my humble m$ disliker opinion.

Clive Robinson December 7, 2009 6:54 AM

@ just say no to lizard handlers,

“Then where do you place your trust?”

Most users don’t put it anywhere, they don’t even have cause to think about it….

Although you are correct that MS is a major corp, supplies a closed-source OS and apps, and its patches are both closed and often excessive in size.

Does that make it inherently untrustworthy?

Does it matter?

The simple question is: in an unregulated, highly competitive market, why would anybody trust any corp?

As for the supposed alternative, “Open Source” in its various forms has many disadvantages as well.

For instance, any moderately complicated app is very dependent on the tool chain used to turn the source into executable code. Sadly some coders cut their cloth to the tool chain, not the standards; likewise to the OS. Thus taking the source and getting it to run on an OS can be fraught with complications.

Most users are not sufficiently technically minded to do a compile from source, which is why Open Source is most often downloaded as a precompiled binary.

Why should a user trust the package any more or less than one from a major corp?

When Open Source gets what the attackers regard as a significant level of usage I will expect them to concentrate on it as much as they do for high usage closed source code, and not much before that.

MS have improved their game, as can be seen by attackers going after applications from other organisations (Adobe etc). But from this it can be seen that they are still targeting the “big market” code which most users can be expected to have.

Info Sec is a funny (not so) old game, trust is not in the game play.

To the attackers it’s a numbers game. To code developers it’s always a game of catch-up. Most users care not what the OS is, just the app they need to get a job done, and thus play the game. To geeks and gamers their hobby needs drive their desires and passions, so it is a game of love. To observers and industry pundits the basic rules of the game should be obvious. So why do they behave like geeks in their outlook?

Beats me, I can see pros and cons in both camps.

But at the end of the day the majority of people want to get a job done with the minimum of pain and perhaps some pleasure. The attackers know this so that directs their activities not some other more esoteric reason.

Thus trust has little to do with the game.

Nick P December 7, 2009 12:25 PM

@ Clive on 12/07/09 6:54am

Why do you even respond to the trolls? It just gives them a reason to keep up flame-war attempts. Folks like us know UNIX was worse than NT for the first 10-15 years of its life. Crashing constantly, security issues, inconsistent apps, poor exception handling, filesystem corruption, bloat… the list goes on. Many complaints in the famous UNIX-Haters Handbook still exist in some form today after almost two decades of work and nearly a billion dollars invested. Systems like Multics, IBM zOS, VAX/VMS (esp. with the A1 security kernel), and NextStep had little to no problems for most of their lifetime and met their intended goals almost perfectly in the eyes of users & admins. Let the kids argue over which toy hacker-bait OS is better while the adults continue to get work done on the boring, mature and rock-solid solutions that worked, are working and will always work. 😉

So, remember: Don’t respond to trolls. Feedback is to them as good music and beer is to you. I think of trolls like mushrooms: keep them in the dark, feed them shit, and watch them grow up. 😉

Nick P December 7, 2009 12:30 PM

Ah, three huge posts in one day. It’s good to see I’ve got your brain going from 4AM to 7PM. I might have to put this reply in two posts. I’m just working my way down your points, as before. The master key is a combination of a system generated key and a password-to-key trick via PBKDF2 standard: 256-bit onboard secret key + 256-bit user-supplied secret = 256-bit master key that spawns keys used to do useful shit. If any of these three are lost, all protected data is unrecoverable. This is because I check entropy of inputs (rejecting weak anything) and use SHA-256 extensively. Every mix is a hash instead of XOR so that I have one-way-ness (assuming SHA-256 isn’t flawed. sighs). The zeroize command deletes the 256-bit onboard secret key, which is necessary to produce proper master key. There is NO recovery function: “that which we call a” backdoor, “by any other name would smell just as” bitter. 😉 The onboard key is in battery-backed static RAM, with optional redundant cryptocard to protect against whatever. Redundant requires different protocol, so I focus here on 1 card setup. Since I extensively use one-way functions, the loss of ANY 256-bit value should, in theory, make everything unrecoverable. I just assume torture is an option, so zeroize kills onboard key if not in use and if in use onboard and master keys, along with everything else.
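
For readers following along, a hypothetical sketch of this style of derivation (this is not the actual implementation; the iteration count, names and purpose labels are assumed) looks roughly like this: PBKDF2 turns the passphrase into a key, every mix is a SHA-256 hash rather than an XOR, and losing the onboard secret makes the master key underivable.

```python
# Hypothetical sketch of an onboard-secret + passphrase key derivation.
# Deleting the onboard secret ("zeroize") makes the master key underivable.
import hashlib
import secrets

ITERATIONS = 200_000          # assumed PBKDF2 work factor


def make_onboard_secret() -> bytes:
    return secrets.token_bytes(32)            # 256-bit device secret


def derive_master_key(onboard_secret: bytes, passphrase: str, salt: bytes) -> bytes:
    # password-to-key via the PBKDF2 standard
    user_key = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, ITERATIONS)
    # every mix is a hash rather than an XOR, so the combination is one-way
    return hashlib.sha256(onboard_secret + user_key).digest()


def derive_subkey(master_key: bytes, purpose: bytes) -> bytes:
    # the master key "spawns" per-purpose keys, again via one-way hashing
    return hashlib.sha256(master_key + purpose).digest()


if __name__ == "__main__":
    salt = secrets.token_bytes(16)
    onboard = make_onboard_secret()
    master = derive_master_key(onboard, "correct horse battery staple", salt)
    disk_key = derive_subkey(master, b"disk-encryption")
    # "zeroize": drop the onboard secret; in a real system the bytes would be
    # overwritten in place, which Python cannot guarantee.
    onboard = None
```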

I’m aware of the EM attack: you mentioned it before when we discussed EMSEC. It’s disturbing, esp. its simplicity & low cost. My design again has no EMSEC protection. Most clients and users don’t care for it, so if one wants it I just shove it in an EMSEC desk-side container. Best I can do. And hopefully when their eyeballs explode they remember what the zeroize button feels like and start pounding it with their fist. 😉

On your second post, you seem mainly concerned about data leakage due to structural information remaining in ciphertext and make a few points about my main claim: encryption w/out key = zeroize. I think the comparison to public key is unfair, as most good block ciphers with decent encryption mode & 128-bit+ key are unbreakable in practice. Blowfish and IDEA have lasted forever, it seems. TrueCrypt supports using two different ciphers, if I choose. As far as HD structure being visible, it all depends on how TrueCrypt is implemented and whether XTS mode is secure. Judging from the content of your second post, I really need to reevaluate these to ensure one of those gotchas isn’t impairing security. There have been no successful recoveries to date, but that doesn’t mean there isn’t a bug waiting to be discovered…

Remember that mine is inline between two filesystems, not the ATA cable. The user writes to a virtual NTFS or ext2 filesystem which is transparently encrypted and stored on some other filesystem as ciphertext, with caching done within the encryption driver (that I’m aware of). Several layers, as you suggest, would be better. I’ll keep that in mind when looking for other options, like maybe using my encryption scheme AND a self-encrypting hard drive w/ password, which may be known only to my software and erased on zeroize. Application-level encryption, stored on an encrypted filesystem, on an encrypted drive. As fast as everything is, all of this transparent encryption might not even slow things down. So, I might take the defense-in-depth encryption approach you’re suggesting.
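
A minimal sketch of that defence-in-depth layering, assuming the third-party Python cryptography package and with the layer names invented purely for illustration: each layer holds an independent key, so breaking one layer still leaves ciphertext underneath.

```python
# Illustrative layered encryption: three independent keys, nested ciphertext.
# Requires the third-party `cryptography` package (pip install cryptography).
from cryptography.fernet import Fernet

app_layer = Fernet(Fernet.generate_key())    # e.g. application-level encryption
fs_layer = Fernet(Fernet.generate_key())     # e.g. encrypting filesystem
drive_layer = Fernet(Fernet.generate_key())  # e.g. self-encrypting drive

plaintext = b"sensitive record"
stored = drive_layer.encrypt(fs_layer.encrypt(app_layer.encrypt(plaintext)))

# reading back reverses the layers in order
recovered = app_layer.decrypt(fs_layer.decrypt(drive_layer.decrypt(stored)))
assert recovered == plaintext
```

The ordering only matters for decryption; the point is that no single key, and no single compromised layer, recovers the plaintext on its own.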

Nick P December 7, 2009 12:36 PM

On your third post now. You and I seem to have the same approach to dealing with safety: introducing manageable complexity that deals with faults and malice. My modular MILS or component architectures are intended for this: each layer represents a self-contained function of useful work, can be independently verified, and then the system (the interactions of components) is verified as a whole. I use FSMs for some components, but I usually take a functional approach to specify global behavior. With Haskell, I can implement it directly. Looking at your rules, I follow the first three whenever possible. I can’t say for sure that I do the fourth and fifth without looking at my designs.

As I said, I start with a functional specification. We follow the Galois Inc. and NICTA (L4.verified) methods of creating an executable specification in Literate Haskell. So, I can’t guarantee only one input affects things because that really wasn’t our goal during design. In a pure language like Haskell, we are basically defining relationships between input and output using functions. With monads, we can ensure a particular temporal order for certain things, but the overall spec is functional. I guess I try to follow your distinct-state and one-input rules, but I end up finding an occasional (critical) component that requires at least two other components to work. Additionally, since my components communicate via message passing, a certain amount of feedforward and feedback is necessary. This is a side effect of using pure microkernels or separation kernels, which do everything by message passing. Fortunately, the use of functional programming and the single-process, event model for each component helps us precisely define its behavior under any input. Using this, we can model the interactions at different levels of granularity and try to prevent problems coming from feedback, feedforward or dealing with multiple inputs. I think your suggestion of avoiding them is wise, but I find it difficult to imagine how in an interactive, possibly networked system. You could say the Haskell, modularity, and careful structuring of interactions are my way of coping with this hard reality.

You mention again the maxim on clocking the inputs and the outputs. Remember that this isn’t a hardware circuit but a series of tasks (event-driven components) running on a microkernel on a COTS processor. The drivers are either native or virtualized Linux drivers (no way to avoid that). I’m trying to make sure I understand how I’d apply your concepts to such a system. I aim for a fixed transmission rate for COMSEC applications and try to prevent clock skew for timestamping, but don’t really clock anything else right now. As applied to my scheme, which is inlined encryption between filesystems (RAM on the left, HD on the right), are you saying I should try to ensure a fixed amount of time is taken during the encryption process so that key material can’t leak? Or do you only apply this maxim to EMSEC issues? Are there any other side channels in a COTS software encryption system that your maxim can apply to? Sorry if I seem confused by it, but you’re the only person who tells me this. If EMSEC defenses weren’t usually classified, that would worry me.

Again, I thank you for your responses, past and future. While the design of this scheme prevents certain changes you suggested, I’ll definitely keep in mind your words on the states, inputs and chains on the next event-driven system I design. I’m always designing and redesigning, so it shouldn’t be long. 🙂 With your permission, I’m archiving this discussion along with the others like TEMPEST defense… a series of posts I’m still struggling to fully understand. 😉

Clive Robinson December 7, 2009 11:51 PM

@ Nick P,

First off, my comment about key space was in reference to the brute-force benchmark, not your system.

It highlights the point that comparing unlike methods is difficult at best and that an underlying assumption can get you into a lot of trouble (I’m also aware that you are possibly not the only person to be reading this 😉

With regards to “clock the inputs and outputs”, it’s one of the few ways to stop side channels or choke them down.

On the vague assumption that everything else is OK with any given system, side channels and data structure are your next two main enemies.

This is because modern encryption such as AES is, going by the “current publicly known art”, impractical to break (add usual disclaimer 😉

Therefore they are the only practical ways to get a grip on observed encrypted information that has been sent down a Shannon channel these days.

And if you think about it, the HD is effectively the gift of a recording of that traffic to the forensic people (but thankfully without time detail).

Also, “clock the inputs and outputs” is a design maxim for keeping complexity down: essentially, breaking into small functional blocks with “strictly controlled interfaces” prevents time-based side channels between the blocks (a win-win, rare in design, which further suggests it’s a fundamental requirement 8)

If you do not keep time-based channels under control then the system can not only leak information itself, it can become transparent to leaks through it.

If you have a hunt around on the web Matt Blaze had a student that demonstrated this via modulating key press times which made it all the way through to the network and could be picked up by passive monitoring of network traffic…

Time-based side channels occur as a side effect of either inherent or induced problems in the design.

This is irrespective of whether the system is under attack, be it passively or actively.

Thus you have to assume that each block, be it software or hardware, is effectively trying to get a time channel through subsequent blocks in the chain (oh, and it’s another reason to avoid feedback/feedforward).

Both inherent and induced channels are going to be there in abundance with COTS hardware and OSs, simply because they both lack segregation. And they are designed for “efficiency” (maximum performance for your buck).

One of my maxims is “efficiency -v- security”: as a general rule of thumb, the more efficient a system is, the more likely it is to have time-based side channels that can leak information (have a hunt on the web for key leakage via cache hits).

So security has to be built in from the start, and arguably is not possible on the likes of commodity PCs, which is why you should always use “crypto software” OFF LINE or via a suitable “air gap” etc. My preference is two separate boxes, as this limits potential “store and forward” attacks.

So if data is clocked between blocks, the ability to use a time-based channel is reduced below the clock rate.

That then only allows an attacker a channel based on the clock period, by either having data to clock or not having data to clock. If your interface control picks this up and treats it as an error then you further choke down the bandwidth of a potential time-based side channel.
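
As a rough illustration of that choke point (the names, frame size and clock period are invented; this is a sketch, not a hardened implementation), an output stage that only releases fixed-size frames on fixed clock edges, and treats a missing or malformed frame as an error, looks something like this:

```python
# Sketch of a clocked output stage: frames leave at a fixed rate regardless
# of when (or whether) real data arrived, and "no data at the clock edge"
# is treated as an error rather than silently altering observable timing.
import queue
import time

CLOCK_PERIOD = 0.05          # fixed output period in seconds (assumed)
FRAME_SIZE = 64              # fixed frame length in bytes (assumed)


def clocked_sender(inbox: queue.Queue, send, ticks: int):
    next_edge = time.monotonic() + CLOCK_PERIOD
    for _ in range(ticks):
        time.sleep(max(0.0, next_edge - time.monotonic()))
        next_edge += CLOCK_PERIOD
        try:
            frame = inbox.get_nowait()
        except queue.Empty:
            frame = None
        if frame is None or len(frame) != FRAME_SIZE:
            # interface control: underflow / malformed frame is an error;
            # a real system would log it and fail safe, not skip the slot
            send(b"\x00" * FRAME_SIZE)       # constant-looking padding frame
        else:
            send(frame)


if __name__ == "__main__":
    q = queue.Queue()
    q.put(b"A" * FRAME_SIZE)
    clocked_sender(q, send=lambda f: print(len(f), "bytes sent"), ticks=4)
```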

Having got one domain (time) for side channels under control you then have other domains to worry about but that is an area that is not yet publicly under investigation as far as I’m aware.

However, stopping down the time-based aspect also has a curbing effect on the other domains (frequency, sequency, etc.).

Clive Robinson December 8, 2009 12:13 AM

If there are any security engineering students reading who are stuck for “original” ideas for a Ph.D. project, here is a suggestion for you.

You should be aware that some forms of CDMA are based on DS Spread Spectrum.

You might also be aware that, to meet EMC masks, a large amount of commodity hardware uses spread spectrum to spread the energy of the EMC-failing signals (sprogs) across a wider bandwidth and thus drop the level of the sprogs below the mask.

Now if you think about it, one amateur InfoSec assumption is that putting a machine with sensitive information inside a group of ordinary machines will mask the signal.

Show how CDMA techniques make that a false assumption. Further show how the use of DS SS on such a machine actually offers a coding gain for TEMPEST.
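
As a toy illustration of why the “hide it in a crowd” assumption fails (the chip sequence, noise model and parameters below are invented for the sketch), correlating the combined emissions against a known or guessed PN code pulls the spread signal straight out of the other machines’ uncorrelated noise, with a coding gain that grows with the chip count:

```python
# Toy DS spread spectrum demo: despreading with a known PN code recovers
# the target machine's bits despite the uncorrelated emissions of the
# "crowd" of ordinary machines around it.
import random

random.seed(1)
CHIPS = 255
pn = [random.choice((-1, 1)) for _ in range(CHIPS)]   # attacker-known PN code

data_bits = [1, -1, 1, 1, -1]                         # target's secret bits


def spread(bits):
    return [b * c for b in bits for c in pn]


def crowd_noise(length, machines=10):
    # uncorrelated emissions from the other machines in the room
    return [sum(random.choice((-1, 1)) for _ in range(machines))
            for _ in range(length)]


received = [s + n for s, n in zip(spread(data_bits),
                                  crowd_noise(CHIPS * len(data_bits)))]


def despread(rx):
    out = []
    for i in range(0, len(rx), CHIPS):
        corr = sum(r * c for r, c in zip(rx[i:i + CHIPS], pn))
        out.append(1 if corr > 0 else -1)   # coding gain scales with CHIPS
    return out


# with a 255-chip code the recovered bits are overwhelmingly likely to match
print("recovered:", despread(received), "sent:", data_bits)
```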

Ajdin December 9, 2009 5:22 AM

I use Cryptainer LE to encrypt my passwords in a spreadsheet. The other day I decrypted my file, and while I was doing something the PC crashed. After the restart OpenOffice offered to recover my last used document and, to my surprise, it recovered my spreadsheet containing all the passwords. Cryptainer LE wasn’t running, so OpenOffice must have stored the file in a cache somewhere. Is this common? If OpenOffice keeps a copy of any file I open, then there is little point in encrypting my files!

Moderator December 11, 2009 2:43 PM

For anyone still paying attention to this thread, note that the anti-Windows comments that keep being posted are all from the same troll, using sockpuppets to agree with himself. Please ignore them until they’re removed.

Pierre December 11, 2009 3:19 PM

Anyone about OS virtual memory?

Because as soon as you read something, those “modern” operating systems cache like mad cows, without any (technical) reason (plenty of untapped RAM).

Also, anyone about OS file indexing?

Why the hell is this feature opt-out?

Finally, anyone about antivirus scanning?

Yes, they index your disk ‘to find changes faster’.

Let’s face it, too many people are so in love with your personal data that they take care of it on your behalf.

It might help, in case you want to retrieve a file you mistakenly erased…

Nick P December 11, 2009 11:39 PM

@ Pierre

It’s kind of simple. Bruce and others have covered this stuff before. Those who design most systems focus on what users want, which is mainly productivity and performance. Security and privacy have always been discretionary, and privacy has always come last. That users have been conditioned in recent years to accept a continuous infect-patch cycle doesn’t help.

@ Clive

I appreciate your peer review of my instantly erasable system design. Overall, I take it you find no trouble with architectural and software design. The main potential and actual weak spots you’ve found are possibly incorrect drive-level assumptions made by inline software, hardware side channels, and potential timing channels. Personally, I don’t find these weaknesses bad: they are a small attack surface with very sophisticated skills required. I’ll continue to look into those problem areas and try to identify improvements, but I know from government R&D that one can never get rid of all covert channels: only reduce bandwidth.

Nice idea on the Ph.D. project. For me, I guess I’m focused on dependable distributed systems, where distributed might mean logical or physical separation. I think I’ll focus less on novelty than simply implementing old ideas correctly. Considering the defect rate in modern software, one should be able to earn a Ph.D. implementing a bunch of medium assurance real world software (e.g. a non-exploitable LAMP stack for web apps). Too bad. I guess I shall have no Ph.D., but I will have the satisfaction of making useful stuff. Personally, I’d like to see more medium to high assurance open-source projects, esp. BSD license. Then, corporations would use it in their own code and we’d all benefit.

Clive Robinson December 12, 2009 11:24 AM

@ Nick P,

“I guess I shall have no Ph.D., but I will have the satisfaction of making useful stuff.”

Yup, and you’ll be in good company. I’m not really sure what a Ph.D. offers people these days (outside of academia).

I guess it falls down to the old ‘Viva le Diff’ between engineers and scientists 😉

‘Scientists are people who look for problems by which to improve mankind’s knowledge. Engineers are people who look for solutions by which to improve mankind’s lot.’

Me, I’m on the side of the engineers: it’s more fun getting things done than it is to ‘publish or perish’.

“Personally, I’d like to see more medium to high assurance open-source projects”

Yup, then people would start to see what properly written code should look like. And hopefully stop chasing the wrong metrics…

A case in point: I once told a bunch of software engineers I had to manage that there was a fixed pot of money for their Xmas bonus and it would be distributed based on three things only: to be eligible they had to meet their agreed time projections; they would get points for each bug they reported in somebody else’s code; and they would lose double the points if a reported bug was down to them. The pot would be split pro rata on points earned…

The bod who walked away with just under 60% of the pot (the team was fifteen developers and two team leaders) was an old hardware engineer who used ‘Z’ and had previously received only a very small bonus, because their previous manager counted lines/day written (and lapped up the ego boosting).

For some strange reason my office door saw less traffic, and the high-productivity code cutters (who got little bonus) had left by March and were not missed by the team. The team’s delivery times actually dropped by around 15%, with on-target delivery up by over 300%, by the time I moved on the following August (I’m told their bonus pot went up by only 10%, which was a shame as they deserved better).

“esp. BSD license. Then, corporations would use it in their own code and we’d all benefit.”

I’m marginally in favour of some of the other Open Source licences because I’m old-fashioned and like “the body of knowledge” to improve under peer review, as everybody’s boat floats that little bit higher (it used to be normal in tangible engineering to do this).

However, due to the intangible “zero copy cost”, I can understand why more traditional businesses would prefer the BSD licence. And there are some occasions where it has other advantages (I’ve used it myself for “licence manager” systems).

As for your system, it sounds like you have a good idea of what you want to achieve (always good 😉 and importantly how to get there by an “engineering” approach.

There may be other gotchas in there, then again there may not.

When they let me out of this hospital I’ll sit down with a large pot of tea and draw it out and have a proper look at it 8)

Nick P December 12, 2009 1:38 PM

Ah, you put a price on quality for the developers. Yes, that seems to be the only way to get it these days. And yes, I prefer adding to the body of knowledge (GPL-style), but it’s not compatible with capitalism. If I think I’d be better off with companies exploiting a particular work of mine, such as a rock-solid firewall, then I prefer BSD for that work. OpenBSD is a good example of this philosophy, which is why their OS and tools (OpenSSH, for example) replaced crummy alternatives all over the world. BSD allows a good idea to penetrate businesses w/out restrictions. No guarantees: just greater assurance. 😉

Sorry to hear about you being stuck in the hospital. Seems like you’ve been saying that for quite a while now. Hope you get out. Don’t bother drawing out my scheme, though, unless you are thinking of using it yourself. I was getting feedback on the original for use in making the next one. I’m not 100% confident that RAM-based Linux doesn’t touch a disk & that the driver only talks to my coprocessor. There’s just too much complexity. I’m creating a new design built on the MILS architecture I espouse to increase isolation and I’m going to try to implement it on OKL4 or QNX Secure Kernel. If I get over my own recent problems, mostly financial (frackin’ recessions…), then I’ll start designing/building it and let you draw that out.

Before I build that, though, I’ll be building a few medium assurance dedicated appliances: Tor; high assurance text/VOIP chat; firewall/vpn appliance on OpenBSD (been done before, but not by moi); truly isolated, but convenient, web browsing using dedicated cheap 2nd PC, OpenBSD gateway for sharing and KVM switch. If you want to review any of these when built, perhaps for personal use, just ask and you’ll get it free of charge. 😉

Nick P December 14, 2009 1:10 PM

@ kraloyun

You cut and pasted my first paragraph into your own post. Was there something you meant to add?

@ PackagedBlue

Yeah, that would be ideal. Personally, I’d like to see it even more low-level: OpenBSD team developing a Botan- or Crypto++-style library. I’ve found that security-critical, real-world applications strongly depend on relatively few components to maintain security properties. Usually compression, networking, crypto, storage, XML parsing, etc. If these things are correct, then the application’s overall security improves. I’d like to see the OpenBSD team build rock-solid versions of these, which we could then put in many OS’s or applications. It would be a few million lines of source in all, but I think it would be worth the effort to shore up the trusted computing base.

gongcho December 26, 2009 6:43 AM

This, of course, is entirely off the point… and in fact Clive and Nick have made great, interesting contributions… but just for the record, Clive: for someone so knowledgeable in certain fields, your spelling abilities are absolutely appalling 😮

Clive Robinson December 26, 2009 4:58 PM

@ gongcho,

“For someone so knowledgeable in certain fields, your spelling abilities are absolutely appalling :o”

Unfortunately I’m somewhat dyslexic, which I’ve been told (over 40 years ago) is a side effect of something strange in the way my nervous system is not cross-wired in the usual way. What the modern diagnosis would be I have no idea.

What I do know is that it got one heck of a sight worse when I was attacked on the way to work one morning back in 2000 and had my head karate-kicked into a post supporting a road sign by a college student.

This resulted in, amongst other injuries, a full fracture of the lower jaw right on the point of the chin. I was told by the maxillofacial surgeon that it is the hardest bone in the body to be broken in that way and that such a break is extremely rare in people who are still alive.

To add to the fun it developed significant complications and a serious infection, resulting in repeated surgery, on one occasion of which I woke up half way through (which is really good for waking in a cold sweat every few days).

There has also been other damage resulting in all sorts of interesting physiological medical problems, including difficulties with sleeping, migraine-like headaches, memory loss and other symptoms similar to mini-strokes, proneness to debilitating sinus and inner ear infections, loss of taste, teeth and a properly working saliva system, and, most unfortunate of all, occasional loss of consciousness on exertion for no readily apparent medical reason.

All of which is a bit annoying at the best of times.

Unfortunately, due to this, I spend a lot of time under the less than tender ministrations of the UK’s National Health Service. One way or another they have managed to give me a couple of DVTs and PEs, with the result that I now have to eat rat poison every day.

The other side effect is having to access the Internet through a little mobile phone when the nurses etc. are not looking (otherwise in some NHS hospitals they take it off you as a “valuable” or “prohibited item” and lock it up until you are discharged 8(

You cannot use a laptop or notebook because you are not allowed to plug it in to charge (officially because it has not been safety tested by a hospital electrician…)

However you can pay the equivalent of 5 USD/day to use a bedside system where you pay the equivalent of 1 USD/minute to make or receive phone calls. And the last time I tried, some exorbitant rate to access the Internet with a full “kiddy lock” on, which prevents access to this and most other blogs (and most Google searches).

Unfortunately the mobile I have does not have a built-in spell checker. I’m waiting for a reasonably good Android phone with a browser with a built-in spell checker to replace this out-of-contract phone.

When I do, the spelling issue will go away and will probably be replaced by some other problem (such as effect/affect, of/off etc. 😉

Nick P December 28, 2009 5:42 PM

@ gongcho

Thanks for noticing. 😉 The point of the blog, aside from showing off Clive’s mastery of grammar, is producing good ideas on many different issues. Glad you’re having fun. 😉

@ Clive

Holy shi-ite! That explains why you seem to be in the hospital all the time and why you insist on using a mobile phone for internet. Wish you the best on recovery, friend. Perhaps one day you will be able to type those insightful posts while making full use of Word’s spellchecking capabilities. I think I should have bought you a copy of OpenOffice for Christmas. You’re worth the money. 😛
