Alex March 26, 2019 7:20 AM

FWIW, Windows 10 has something built in that will erase the laptop and remove all of your apps and personal data.

They have an option that’s slow enough to make me think it’s overwriting the empty parts of the file system, although I haven’t verified that.

Roboticus March 26, 2019 7:41 AM

I spent about 5 years trying to convince a computer repair shop I worked at to overwrite drives on machines we refurbished instead of a simple format and reload. “It is too complicated” was the answer. I left because of the increasingly shady business practices, but I also know firsthand they were one of the better ones in the area.

Petre Peter March 26, 2019 9:43 AM

To me, it seems that the word delete has lost its meaning. A clean slate can only be provided by authorities; the rest is self incrimination.

Rj March 26, 2019 10:56 AM

Makes a good case nowadays to just use only SSDs with a data security-erase feature built into the firmware. With that, you can securely erase a terabyte drive in a few minutes: since the drive’s internal representation is encrypted anyway, all the firmware needs to do is securely destroy the key.

I have used encryption on my backup server for over 15 years, so I can offsite store old drives for archival and not worry about data theft during physical transit or physical storage. It has the additional nice feature that if I keep the key on a separate device, I can remotely erase an unmounted and unpowered drive by securely erasing the key, which is much faster than wiping a disk would ever be.
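This crypto-erase idea can be sketched in miniature with OpenSSL on an ordinary file. This is an illustration only, with made-up file names; a real deployment would use full-disk encryption such as LUKS, where `cryptsetup luksErase` destroys the on-disk keyslots. Destroying the small key renders the arbitrarily large ciphertext useless:

```shell
# Crypto-erase in miniature: destroying a 32-byte key "erases" the
# whole encrypted archive, however large it is.
set -e
tmp=$(mktemp -d); cd "$tmp"

# A random 256-bit key, kept on a "separate device" (here, just a file).
openssl rand -hex 32 > key.hex

# Encrypt the "archive" with the key, then drop the plaintext.
printf 'sensitive archive contents\n' > archive.txt
openssl enc -aes-256-ctr -pbkdf2 -pass file:key.hex \
    -in archive.txt -out archive.enc

# Sanity check: with the key present, the data round-trips.
openssl enc -d -aes-256-ctr -pbkdf2 -pass file:key.hex \
    -in archive.enc -out roundtrip.txt
cmp -s archive.txt roundtrip.txt && echo "round-trip ok"

# "Remote erase": destroy the key (use shred/secure delete in practice).
rm -f key.hex archive.txt roundtrip.txt
echo "key destroyed; archive.enc is now just noise"
```

Erasing a 32-byte key is seconds of work regardless of drive size, which is exactly why the technique scales where overwriting does not.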

tfb March 26, 2019 12:14 PM

Is there a good reason why it’s still possible to buy portable devices which don’t store everything bar the decryption key encrypted, and then destroy the key on too many access attempts? I’m sure there must be.

Phaete March 26, 2019 12:27 PM

We have several business sectors where they need a certificate of destruction for every storage device that is abandoned.

Physical shredding is the best answer, as there are quite a few claimed methods for retrieving overwritten data, such as realigning the read head to read the residual signal.

Vladimir March 26, 2019 1:50 PM

I can’t say I’m surprised. People still use 123456 as a password, type passwords in public places and even on national TV (to be fair, that password was 123qwe :))

And that is not limited to end users. More than once, company computers and servers have gone out the door with some interesting data still on them.

Jesse Thompson March 26, 2019 1:54 PM

@Rj and @tfb

Just keep in mind that your “encryption = unreachable” idea only works for as long as that encryption scheme is secure.

The best one can hope for in that circumstance is that it will buy you enough time to denature all of the data on the hard drive (make it useless to anybody who might want it).

Harris S Newman March 26, 2019 3:02 PM

I saw they recommend using a drill press to destroy drives. I’ve been arguing that this only destroys part of the data on the drive, not all the data, as NIST would require us to do. Shouldn’t one render the entire platter unreadable, not just part of it?

vas pup March 26, 2019 3:24 PM

@Phaete • March 26, 2019 12:27 PM
“Physical shredding is the best answer.”
Yes, in particular for non-magnetic media, which cannot be degaussed.
Melting in a microwave oven or dissolving chemically may be other options.
Shredded pieces should be as small as possible.

JohnnyS March 26, 2019 5:49 PM

For SSDs and spinning disks these days, dd is your BFF. You can use it to write zeroes to all normally writable bits on an SSD or spinning disk, and for most uses that is sufficient.

If anyone wants to get data from those devices after that, it will take far more technical resources than the typical cybercriminal gang has. And if someone with that kind of resources is after you, then you have bigger problems.
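A minimal sketch of that dd wipe, demonstrated against a scratch image file standing in for the device. On real hardware you would point `of=` at the device node (e.g. `/dev/sdX`), drop `count=`, and triple-check the name first, since dd will happily destroy the wrong disk:

```shell
set -e
# A 1 MiB image file stands in for the disk in this demo.
img=$(mktemp)
printf 'old secrets old secrets' > "$img"

# Overwrite the whole "device" with zeros. conv=fsync forces the data
# out to stable storage; on a long real wipe, status=progress is handy.
dd if=/dev/zero of="$img" bs=1M count=1 conv=fsync status=none

# Verify: no non-zero byte remains anywhere in the image.
if ! tr -d '\0' < "$img" | grep -q .; then
    echo "all zeros"
fi
```

Note that on an SSD this only reaches the logical block range the OS can see, not any spare blocks the controller holds back.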

John March 26, 2019 7:10 PM

I read the article but I don’t grok powershell. Is the author claiming these files weren’t even “erased”? That seems incredible to me. I’ve obtained used hardware and had to at least run testdisk/photorec on reformatted hard drives.

MrC March 27, 2019 12:20 AM

@Phaete: Please correct me if I’m wrong, but my understanding was that reconstruction of the contents of a zeroed drive from their magnetic remnants was a purely theoretical attack, and that no one has ever actually reconstructed a file from a zeroed drive this way. Can you point me to an example of someone actually doing it?

Re: Physical destruction — Although nasty fumes are involved, I’d think that burning to ash/melting to liquid would be more absolutely certain than shredding. (Although reassembling/mapping a shredded drive sounds to me like science fiction, so it hardly matters.)

You don’t need to pay for shredding for HDs with ceramic platters. A solid hammer blow to the case will shatter them into a million pieces. Hear a tinkling sound like broken glass when you shake it? Yeah, that drive’s done.

@Rj: My understanding is that SSD firmware “secure erase” features (and SSD firmware in general) are often poorly implemented (and the key may not be stored securely in the first place). Moreover, they’re all proprietary and unauditable, so I wouldn’t trust them.

@JohnnyS: The problem with securely deleting SSDs (assuming their “secure delete” function is inadequate) is that they’ve got a whole bunch of unprovisioned blocks that they use for scratch space during delete operations, rotating in and out of provisioned space for wear leveling, and replacing provisioned blocks when a cell fails and goes read-only. Nothing you can do from the OS level, including dd, can force a delete of the unprovisioned blocks. You can’t even force a delete of the provisioned blocks, since the controller may just swap them out and flag them for later deletion.

But they can be read by unsoldering the flash modules or hooking the drive up to a board that forces it into factory mode.

If your data is also encrypted in software (e.g., LUKS, VeraCrypt), the worst-case scenario is a copy of an old header with a compromised password lingering on where you can’t delete it. If it’s not, the worst-case scenario is your most sensitive file lingering on where you can’t delete it. (Note: Bitlocker uses hardware encryption when it can. Also, Microsoft has your key…)

Jon March 27, 2019 1:01 AM

@Phaete & H.S. Newman:

Physical shredding is expensive, and most data just isn’t worth that much to attempt to recover. Putting a good-sized (say, half-inch diameter) hole in a platter is, if nothing else, guaranteed to make sure it’ll never spin again (and what a flying read head does when it hits a hole is not something I think they’d appreciate).

True, if you were the CIA and the disk held Russian nuclear weapons data, shredding (and then incinerating, and then mixing up the ashes – USA classified SOP) would be better, but knocking a hole in the platters will generally work just fine. It’s also a lot cheaper and faster, and can be done with stuff from Home Depot. Metal shredders are specialty equipment; a drill press (or even a hand drill) isn’t.

Bending the platters is another fun way of making sure they’ll never spin again. Yeah, they can still be picked over bit by bit microscopically, but that’s a gigantic hassle.

More apropos, I am recalling an experience of some years ago when I personally bought a used laptop at a junk store, and noticed rather a lot of crap on the hard drive. Trying to log in as a ‘different user’ magically got me all kinds of interesting business information.

I phoned up the business in question, and they sent over someone to make a copy of it all just so they knew what had escaped.

As Mr. Schneier pointed out, this is nothing new…


JohnnyS March 27, 2019 9:22 AM


Absolutely agreed. The SSD disks are still at risk even after a good dd wipe. The only mitigating factor is the (current) cost and awkwardness of actually getting into factory mode.

But I’m looking at devices such as the “PC-3000 Portable System” which can access factory mode, and these are little boxes with ports on them. They are expensive, but as we’ve seen in the past, clever hackers can often replace such systems with open-source software solutions. (Example: Asterisk replacing Nortel switches, FreeBSD/Dummynet deployments replacing costly WAN simulators, etc.)

So perhaps soon someone will put a Raspberry Pi into a box with the right connectors and be able to use that to access factory mode on SSDs. That may make it worth the effort for a villain to bulk buy used SSDs and look for secrets.

If I want to get rid of an SSD now, I will start with “dd” and end with “Big Hammer”.

Phaete March 27, 2019 9:50 AM


Shredding is only expensive if you don’t shop around and instead use an IT firm or similar for it.
Just drive to a metal recycling firm and ask if you can throw them in the shredder (and possibly film it).
The most I ever paid was a case of beer.

Ex-Oracle DBA March 27, 2019 10:33 AM

I work for a big online retailer who just migrated their data warehouse from Oracle to their own internal services (no points for guessing who). The decommissioning of the old hardware that Oracle was running on required the physical destruction of over 100,000 spinning disks. Things like these are huge programs which take an insane amount of money and time. The actual destruction and certification will take several months.

1&1~=Umm March 27, 2019 10:51 AM


There are a number of sides to this argument that are often not mentioned but are really the root of the problem.

The first thing people want is not to lose the use of things, and the easy way to do that, as in life, is to ‘keep copies of everything’.

The second is that people want fast response from their computers; the easy way to get that, as in life, is not to waste time ‘throwing out the garbage’.

The third is that, as in life, it takes way longer to put things away in an organised way than it does to ‘just put things anywhere’.

Importantly, though, it takes much more time to put things away when, as in life, you start to ‘run out of unused space’.

With the cost of storage dropping to the point where it only makes sense to price it in giga- (thousands of millions) or tera- (millions of millions) bytes, you can see why, from a user perspective,

1, Data duplication is good.
2, Not house keeping is good.
3, Unorganised storage is good.
4, More storage is good.

Now lets look from the data confidentiality perspective,

1, Data duplication is bad.
2, Not house keeping is bad.
3, Unorganised storage is bad.
4, More storage is bad.

Thus in theory ‘what the market wants, the market gets’. The reality, though, is ‘what producers’ marketing people think is good is what gets made’. Marketers look for the ‘mass market’ because it is easier to sell into, so niche / specialized / minority markets either don’t get products made for them, or only at a disproportionately high premium.

So if you want reasonable levels of confidentiality / privacy / security / secrecy, or what supports them, you in effect have to pay a lot more if such a product exists, or go without and make your own mitigations if it does not.

Which brings about the question of knowledge… Put your hand up if you think you can cover all the bases in the ‘required mitigations’ for the entire life cycle…

You might have noticed I’m very firmly sitting on my hands… That’s because I don’t believe anybody can ‘cover all the bases’, not even a large team of experts. In fact everyone should think likewise, because of time’s arrow.

Let’s just assume that you know –which you actually cannot– all the classes of vulnerability there are out there. Let’s also assume that by some magical process you could cover them all today; what about tomorrow, or the day after?

The point is that the defenders are always working in the past and the attackers in the future, which leaves a gap you just cannot cover.

The only thing you can do is mitigate as broadly as you can the known classes of attack and keep your fingers crossed.

The broadest form of mitigation is segregation, not just separation or authenticated access control.

History shows that segregation of any form is hard and the greater the complexity of any system the harder it is.

Thus you have to look at the very fundamentals of the system, break it into parts, and minimize not just the complexity within those parts but the flows of information into and out of them.

So saying things like ‘Full Disk Encryption at the disk’ is insufficient, because the implication is ‘one key’ which ‘all users need access to in order to use the system’. Further, the key is in use as long as any of the data is in use. Which is why FDE, or other disk-level encryption, only protects when the drive is not in use and the keys have been securely deleted.

The problem is that for most systems you can buy, the keys are not securely deleted, or worse, not securely generated. Humans are really bad at remembering things accurately; even four-digit PINs defeat many people every day. Thus the chance of remembering 256 bits of truly random data is in reality a complete non-starter. So FDE, like most encryption, just moves the problem somewhere else; it does not actually mitigate it, let alone solve it.
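To make the ‘remembering 256 bits’ point concrete, here is what such a key actually looks like in its smallest common textual form (a throwaway OpenSSL illustration; the key is freshly random each run):

```shell
# A truly random 256-bit key rendered as hex: 64 digits a human would
# have to reproduce exactly, every time, with zero errors.
key=$(openssl rand -hex 32)
echo "key:    $key"
echo "length: ${#key} hex digits"
```

Compare that with the four digits of a PIN, and the need for key storage (and hence key deletion) rather than memorization becomes obvious.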

Just about every mitigation you try to come up with is either totally ineffective, such as ‘security by obscurity’, or it pushes the problem elsewhere.

The problem with pushing the problem elsewhere is that it creates new communications channels, be they informational or physical. In either case they need to be made secure, which just increases the complexity, and thus the likelihood of introducing a vulnerability.

Many years ago now, our host @Bruce Schneier made the observation that as far as encryption was concerned we had sufficiently secure algorithms; what we needed to do was go on and sort out the key management issues.

To be honest, since then I don’t really think there has been much effort, let alone significant solutions found. We’ve primarily ended up with the lowest-common-denominator solution of CAs: asymmetric encryption in vulnerable hierarchical human systems, which have, completely unsurprisingly, failed repeatedly, as history had shown they probably would. To make it worse, the ideas of David Deutsch appear to be getting ever closer to fruition. If they do, then our current asymmetric public-key algorithms become very vulnerable…

Thus even though we can see an entirely new class of vulnerability approaching us, we have no choice yet but to build systems around what we have, even though the predictions are it will be vulnerable long before its expected service life is over.

Thus we actually know we can not realistically have confidentiality now or in the future without some other way more physical mitigations.

Thus information storage needs to be in a vault, and storage that is broken or at its end of life needs some really serious ‘ashes to ashes, dust to dust’ treatment.

Boy Scouts used to be told, before leaving their campsite, to ‘Burn, Bash and Bury’ their garbage, which in many ways was sound advice. These days they are told to push the problem somewhere else, much as we do in the ICT industry; thus our information ‘gets recycled’…

Time to start the forge and get the hammer out, carbon footprint be damned; go have some exercise, if not fun 😉

Alyer Babtu March 28, 2019 12:34 PM


For what it’s worth …

Could user friendly data versioning address points 1-3 ?

E.g., 1., no need to explicitly duplicate with another file, just keep committing your updates to the file in question. For 2., 3., the versioning db does the housekeeping and organizing; the accounting problem reduces since there is basically only one of each thing, leaving the more natural, non-computer problem of conceptual relationships and their implied organization, i.e., what/why is something “a one” and not a heap.

This relieves the burden on the user of managing the multiplicity in the change history of data, and so frees mental and creative energy for paying attention to content management. It’s the content that is important. The “files” are a machine organizing method but it’s not clear why users need to know about them: the user’s point of view is content. E.g., one is naturally tempted to duplicate files, but it is not possible to duplicate content. One might change content; the versioning will make this safe.
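One possible sketch of that idea uses plain git as a stand-in for a friendlier versioning layer (the file names and commit messages are invented for the demo): a single file accumulates its whole change history, so ad-hoc duplicate copies become unnecessary:

```shell
set -e
# A throwaway repository acting as the "versioning db".
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name  demo

# One file, many versions -- no report_final_v2.txt copies needed.
echo "draft 1" > report.txt
git add report.txt
git commit -qm "first draft"

echo "draft 2" > report.txt
git commit -qam "second draft"

versions=$(git rev-list --count HEAD)
echo "versions of report.txt: $versions"
```

Note the tension with the confidentiality list earlier in the thread: a version history is, by design, data duplication that never forgets, so the secure-deletion problem reappears inside the repository.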

Footnotes and details to follow …

Alyer Babtu March 28, 2019 12:45 PM

Re above,

Thousand apologies, I should have acknowledged Jef Raskin “The Humane Interface” in regard to file versus content.

Coyne Tibbets March 28, 2019 9:03 PM

Let’s say I have a dark view of security these days, and I think that not clearing disk drives is the least of our worries. Bottom line: if you don’t want it seen by somebody, don’t put it on a computer. If you do put it on a computer, assume someone other than you is seeing it. Probably many someones.

1&1~=Umm March 28, 2019 11:06 PM

@Coyne Tibbets:

“Let’s say I have a dark view of security these days…”

But is it dark enough?

I think it was @Nick P who said don’t use Intel hardware made after 2005. And others said use only hardware from ‘before the turn of the century’ but did not say which 😉

I guess the real question is not what hardware ‘they’ might have targeted / got into, but which hardware ‘they’ have not got around to yet.

It’s a fair bet that both Intel and AMD have been ‘got at’, and with ARM having Chinese interests hanging around it via a Japanese company these days, there are the other ‘theys’ to consider.

Also, with the runaway success of the Raspberry Pi Foundation, what have the secretive Broadcom been up to…

So how about MIPS 32 or 64?

This is a little over three years old,

There is a list of supported hardware for BSD 4.4 at,

With this board,

Being only 22 Euros, it is easily made ready for a prototype board to add extra goodies to. From the blurb it should be able to support five other serial interfaces, so serial terminals or SPI memory cards. Also 100 Mbit Ethernet, so connectivity should not be much of an issue.

Oh, and there are other suitable PIC MIPS boards designed for Arduino shields to be added.

