The Benefits of Endpoint Encryption
An unofficial blog post from FTC chief technologist Ashkan Soltani on the virtues of strong end-user device controls.
Anura • August 28, 2015 2:57 PM
Maybe I’m out of the loop, but I always remember being able to defeat BIOS passwords by clearing the CMOS. That said, HDD encryption combined with a strong password is invaluable.
Dr. I. Needtob Athe • August 28, 2015 3:33 PM
“I backup regularly and always enable disk encryption which is an important step to protect the information stored on the hard-disk from unwanted access by criminals, employers, or other actors (with the exception of very sophisticated adversaries).”
Is he correct to presume that “very sophisticated adversaries” can overcome hard disk encryption?
parabararian • August 28, 2015 3:36 PM
Removing the CMOS battery for five minutes almost always clears the custom settings but some tweaker stealing my laptop is unlikely to know that.
Who? • August 28, 2015 4:08 PM
Removing the laptop/CMOS batteries does not help on well designed laptops (e.g. ThinkPads). Usually replacing a surface mount chip on the motherboard is required. Reflashing firmware using an external SPI programmer does not help either. CMOS settings and BIOS passwords are stored on different chips.
@Dr. I. Needtob Athe
FDE should be resistant against most attackers, but there are weaknesses that can be exploited. My computers usually have FDE drives (OpenBSD’s softraid, AES-256 in XTS mode), but the encryption key is protected by just a passphrase. If I were an attacker I would try to recover the certificate that protects the drive by brute forcing the passphrase itself.
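As a rough sketch of the arithmetic behind that brute-force concern (the guess rate here is an assumed figure, not a measurement — adjust it for your own threat model):

```python
# Rough arithmetic behind brute-forcing a passphrase. The guess rate is
# an assumed figure for an attacker facing a slow KDF, not a
# measurement; adjust it for your own threat model.
def brute_force_years(alphabet_size: int, length: int,
                      guesses_per_second: float) -> float:
    keyspace = alphabet_size ** length
    seconds = keyspace / 2 / guesses_per_second   # found halfway, on average
    return seconds / (365.25 * 24 * 3600)

# 8 lowercase letters: gone in about a day at 1e6 guesses/s.
# 12 characters from the 94 printable ASCII symbols: billions of years.
weak = brute_force_years(26, 8, 1e6)
strong = brute_force_years(94, 12, 1e6)
```

Which is why the passphrase, not the AES-256 key, is almost always the part worth attacking.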
Spaceman Spiff • August 28, 2015 4:20 PM
@Who? My grandson can defeat all of these in about 15 minutes. FWIW, he has his own wave solder system so he can remove and replace any SMT chips he needs.
Thoth • August 28, 2015 7:35 PM
Trusted boot or secure boot has gotten a bad name for itself due to UEFI and the vendor lock-in of the likes of Microsoft and Apple. Neither setting a BIOS password nor encrypting the hard disk with FDE secures the entire boot process fully; at best it is a half-done job. Ironically, there are no known ways to do a secure or trusted boot with the user in control, due to the proprietary nature of the chips, firmware and all other parts of it.
If someone can edit the bootloader or the low-level firmware controlling the boot process before handing the machine over to the user, there is no need to brute-force any passwords or certificates. During the infected boot process, the user keys in his password and the low-level malware can use side channels to exfiltrate the keys and passwords.
“Encryption itself has no agency …”, as our host, @Bruce Schneier, always likes to say.
This is very true when the very hardware and firmware you use can badly betray you, deliberately or inevitably. Look at the number of black-box chips the GCHQ dudes demanded be removed from The Guardian’s MacBooks.
jaime • August 28, 2015 9:58 PM
Ironically, there are no known ways to do a secure or trusted boot with the user in control due to the proprietary nature
Thoth, you may be confusing “trusted” with “trustworthy”. A trusted system is one you rely on to enforce your security policy, whether or not such trust is justified. “Restricted boot” would be a more accurate and neutral term than “secure” or “trusted”.
Matthew Garrett wrote about a way to improve the security of full-disk encryption to prevent bootloader replacement attacks. The TPM and another portable computer (e.g. smartphone) are the main trusted components.
Thoth • August 28, 2015 10:41 PM
What is the difference between trusted and trustworthy?
Trusted is something you trust, and once it fails, it breaks the system security as a whole. Similarly, trustworthy is something in which you place trust, and it denotes the same meaning. Either way, both refer to the same thing no matter how you look at it.
I think @RobertT, @Nick P, @Clive Robinson, @Wael, @Figureitout, @Markus Ottella (hope I got the name right) and myself have extensively discussed such systems in many posts before. We have included points where security chips deployed in real life have not lived up to their expectations and theories, let alone the political meddling and human desires in play which caused these “trusted” or “secure” chips not to perform as they should in our eyes.
The entire security of a system boils down to a Root of Trust (RoT), and most computing systems not designed for security have no RoT. Most security chips as we know them (including TPMs and security-integrated chips from Intel, MIPS and ARM) have some form of RoT, but that is theory only, because you must trust the Intel/MIPS/ARM certificates and designs inside their integrated chips, while history and current politics and human meddling have cautioned us that this kind of trust can easily be misplaced.
A few of us, including myself, have proposed absurd ideas like a trusted controller with tiny open transistors for inspection as a RoT unit. Of course it’s absurd, but “trusting someone’s trust” just isn’t working out the way we expected.
@Clive Robinson, @Wael and @Nick P have contributed their security designs, which I called the “Castle-Prison-Dataflow” architecture, combining the best of all the worlds they have to offer. But again, it is not commercially viable because of the current politics and human meddling in the industry (and the resources needed).
By the way, take a look at Qualcomm’s ARM TrustZone implementation (QSEE) and you will see a couple of rather fatal bugs occurring over the past few years in what was supposed to be the RoT of many “secure phones”. I would guess the Boeing Black designed for US DoD and agency usage might also be ARM with TrustZone, judging from their public documents, and I would guess they might be using Qualcomm, as it’s the single biggest mobile chipset provider with ARM TrustZone all rolled in one.
Unless one can solve the RoT problem, all these security systems have large gaping holes...
QuartzDragon • August 28, 2015 11:46 PM
Well… I am not sure that hard-drive encryption is even to be trusted, if the likes of the NSA are able to insert their malware into the hard-drive firmware… I have to wonder whether even SSDs are safe. If the NSA or Mossad really wants your data, they will get it, one way or another… :/
Maybe I am just being pessimistic?
Thoth • August 29, 2015 1:20 AM
The reality is, there are research experiments and papers in the public domain showing that it isn’t all too hard to get whatever you want (as you mentioned about the NSA or Mossad). Security is not as robust as we wishfully think, because of the lack of a RoT.
Maybe FDE can prevent petty intrusions, until someone does massive or single-targeted insertions and exploits. If you are talking about just stopping that casual somebody, it might work against them, but not against agencies.
Clive Robinson • August 29, 2015 7:34 AM
I have to wonder whether even SSDs are safe. If the NSA or Mossad really wants your data, they will get it, one way or another… :/
The reality is, as always, complicated. As a general rule SSDs are less secure than magnetic drives because of “erase issues” at the storage level (look up “wear levelling” in flash chips). At all other levels they are probably about the same level of risk.
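A toy sketch of why wear levelling creates those erase issues — a logical overwrite lands on a fresh physical page, so the old data stays on the raw flash (all names here are illustrative; real flash translation layers are far more complex):

```python
# Toy flash translation layer (FTL): every logical write goes to a
# fresh physical page for wear levelling, so "overwriting" a logical
# block leaves the old contents readable to anyone who can image the
# raw flash chips. Purely illustrative.
class ToyFTL:
    def __init__(self, num_pages: int):
        self.flash = [None] * num_pages     # raw physical pages
        self.map = {}                       # logical page -> physical page
        self.next_free = 0

    def write(self, logical: int, data: bytes):
        self.flash[self.next_free] = data   # always use a fresh page
        self.map[logical] = self.next_free
        self.next_free += 1

    def read(self, logical: int) -> bytes:
        return self.flash[self.map[logical]]

ftl = ToyFTL(8)
ftl.write(0, b"secret v1")
ftl.write(0, b"XXXXXXXXX")                  # "overwrite" the logical page
```

The drive happily reports the new contents, while the old page sits untouched until garbage collection gets around to it.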
So the next question is how do you mitigate the risk?
Well, being able to get into the HD controller sounds like it gives you lots of power, but even the most powerful of men find themselves powerless when locked in solitary confinement, with their only knowledge of the outside world coming through a guard who cannot be easily subverted.
So you place a guard between the HD and the computer. Its job is twofold: first, to pass only a certain minimum of the valid commands through to the drive; second, to deliberately realign cylinders and blocks of sectors, such that what gets written where is unknown on either side of the guard. The minimum of valid commands helps eliminate attempts to get at the HD’s firmware; the realignment helps reduce issues such as an already infected firmware sending back mal/spyware instead of valid OS/app/data files.
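A toy model of such a guard, for illustration only (the command names and the stored-table permutation are simplifications; a real guard would sit on the drive bus and use a format-preserving cipher rather than an in-memory table):

```python
import random

# Toy model of the "guard" between computer and drive: only a minimal
# whitelist of block-level commands is forwarded (firmware-update
# commands and the like are dropped), and logical block addresses are
# remapped through a keyed permutation so neither side knows the
# physical layout.
ALLOWED = {"READ", "WRITE", "FLUSH"}

class Guard:
    def __init__(self, seed: int, num_blocks: int):
        # Keyed permutation of block addresses; a real guard would use
        # a format-preserving cipher instead of a full stored table.
        self.table = list(range(num_blocks))
        random.Random(seed).shuffle(self.table)

    def forward(self, cmd: str, lba: int):
        """Pass an allowed command through with its remapped address."""
        if cmd not in ALLOWED:
            raise PermissionError(f"command {cmd!r} blocked by guard")
        return cmd, self.table[lba]
```

A firmware-update command never makes it past `forward()`, which is the point: the drive’s firmware cannot be touched through the guard.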
You also make the guard do the encryption, with the keys, the encryption, or both held “out of band”. @Thoth will tell you more on how to make SIMs and similar smartcards into trusted “key-stores/encryptors” for this (the penalty, however, is access speed, which may actually not be an issue).
In effect the guard is what the NSA etc. call an “Inline Media Encryptor” or IME, and the SIM/smartcard the “Crypto Ignition Key”. However, you add some “extras” in the guard, one of which is a low-level port you can attach a terminal or other device to, so you can perform low-level forensic-style analysis with it. You also add a “false boot” trick, whereby the guard appears to be a computer booting up from the drive after power-up, thus getting around another issue with spyware that may already be installed in the drive firmware.
For obvious reasons this drive and guard are going to be “external” to the computer, so there are physical advantages to making it a USB drive. Likewise there are advantages to not making it the primary boot drive. Thus booting off a “read only media” device such as a CD/DVD-ROM or even a floppy is desirable.
As the guard is “inline” you can in effect use it “transparently” and add a second layer of encryption, such as the dreaded BitLocker or TrueCrypt, or, if not using MS (a good idea), whatever driver-level and app-level encryption your OS uses.
One way to use MS more safely is as an image in a VM on a *nix platform.
Which brings us around to another issue: why boot MS OSs at all? For some time MS has had various ways to fast-boot by using a suspended image held in mutable memory such as the hard drive. I’m not sure of the details, but there are ways to make/use these images in VMs or in chain loaders. So it should be possible to store the images and a chain loader on a DVD etc.
There are still some dangers in booting off of such media but again you can mitigate them.
One mitigation is to inspect the memory in the computer and check that it is what it should be. There are a number of ways to do this. The first is to halt the CPU and use a hardware card connected to the computer’s buses to examine the memory; this works with older CPUs such as the 486 on an ISA bus, but not when you have “Intel’s bridges” to contend with. The second is to use a second CPU or DMA device connected to the system buses. A third, which I’ve not tried, is to use the JTAG system; it may also be feasible to “image boot” Intel systems via JTAG, as it certainly is for other families of CPU, it being the way some development systems work.
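A minimal sketch of the checking step, assuming you already have a memory dump and a digest recorded from a known-good image (obtaining the dump — bus card, second CPU/DMA device, or JTAG — is the hard part and is not shown):

```python
import hashlib

# Sketch of the "inspect memory and check it's what it should be" step:
# hash a dumped memory image and compare it against a digest recorded
# when the known-good image was made. How the dump is taken is outside
# this snippet.
def verify_image(dump: bytes, expected_sha256: str) -> bool:
    return hashlib.sha256(dump).hexdigest() == expected_sha256

golden = bytes(range(256)) * 16          # stand-in for a known-good image
golden_digest = hashlib.sha256(golden).hexdigest()
```

Any single flipped byte in the dump changes the digest, so the comparison either passes completely or fails completely.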
So for those with the desire, knowledge and other resources there are ways to beat even State Level attackers when it comes to “information attacks” forcing them into “physical attacks” which demand very high resource input by them and very significant risk (which is why the likes of the Israeli and Russian ICs have in the past simply gone for the lower risk assassination option).
I won’t detail them here but there are even ways to prevent the “$5 Wrench” or “thermorectal” data extraction methods working, because you can only tell what you remember or provide access to…
At the end of the day the measures you take are dependent on a whole raft of things, many require specialised knowledge, and of course resources. But it also requires what few humans appear willing to do, which is “control themselves” part of which is OpSec and for most mortals they don’t practice it when they should. Technology can only go so far, the rest is how strong your personal will is.
As we have seen recently with IS, some people are willing to stand up to them even though they end up dying. The sort of people who have that will and determination, and who also practice good OpSec and tradecraft, will beat state-level opponents every time. And it’s knowing they are impotent in this respect that makes states torture and murder, not out of any real need, but in self-destructive rage at being thwarted and having their vaunted manhood proved worthless. If you think how little, if anything, “waterboarding”, “Gitmo” and the preceding “rendition” have gained the US or its allies, and compare that to the loss of face, prestige and credibility even amongst their own citizens, you can see that no matter what the short-term gains, they can never make up for the long-term losses.
It’s a lesson the US still has not learnt, as evidenced by its behaviour towards Manning and Snowden; if anything it can be shown it’s had entirely the opposite effect, in that there are now more whistleblowers than before...
G.Scott H. • August 29, 2015 7:36 AM
The non-volatile memory component used to store the firmware password is usually a commonly available general purpose design. Those will have a reset pin. Most I have seen clear the password when the reset pin is grounded on power-up. If you have already identified which chip to remove, then resetting is usually going to be easier than re-soldering a new one in its place.
We here seem to have concluded that sophisticated attackers (as opposed to the common thief) can overcome end-point encryption. Many of the organizations we associate with being sophisticated attackers are the ones pushing against ubiquitous end-point encryption. Odd. Maybe we are giving more credit for overcoming end-point encryption than we should?
Thoth • August 29, 2015 9:05 AM
“Many of the organizations we associate with being sophisticated attackers are the ones pushing against ubiquitous end-point encryption. Odd. ”
We have to consider another option in this sophisticated-attacker model. You left out the option of subverting or poisoning standards, like Dual_EC_DRBG. Interestingly, NIST was poisoned into pushing that dreaded RNG; and if you look at the Suite B ECC curves, according to Daniel J. Bernstein (DJB) those curves are dubious in nature and their values, algorithms and points are questionable.
Interestingly, NIST and the NSA have been pushing for the adoption of these curves... The only open curves, like DJB’s Curve25519, Ed25519 and such, are not even considered in the standards, but the open crypto community uses the DJB curves extensively due to their open designs.
I don’t think we over-credited circumventing end-point encryption. Most of us underestimated how badly protected we are.
Thoth • August 29, 2015 9:25 AM
@Clive Robinson, QuartzDragon, Nick P
“You also make the guard such that it also does encryption with keys, encryption or both “out of band” @Thoth will tell you more on how to make SIM and similar SmartCards into trusted “key-stores/encryptor””
Probably the Castle-Prison-Dataflow model that @Clive Robinson and others have been toying around might help a little.
You can simply use smartcards (SIMs are smartcards, just without most of the plastic). I would say use 2 smartcards (or SIMs). The logic is that one is used as an RNG with non-strict key export, so that you can generate a key and know what key you have gotten. The other part is that you load this generated key from one smartcard into the other, which acts as a keystore, either via direct communication between the smartcards (not advisable) or via a central host chip.
The mediating chip would probably be a normal general-purpose chip, as it is expected not to be seen as a threat by the agencies and would probably be left alone. Most agencies would go for the crypto chips, which in my view are a high-priority threat that must be segregated and confined (prison model). The mediator chip would route packets around and inspect them.
The mediating chip would be assisted by its own cryptography-enabled chip (ARM? MIPS? Freescale?) to do secure channel protocols with the smartcards (the encryptor and RNG cards).
This scheme isn’t very complete, as I simply thought of it as I typed, so some of you can help cover its huge gaping loopholes. Multiple low-powered mediator chips can be used for oblivious decision and random-access protocols to confuse attackers and detect betraying chips. The problem is how to tamper-resist the mediator chips, since they are expected to be general-purpose chips without crypto (and therefore without the active and passive tamper-resistance measures available in most crypto chips). The security of the ROM instructions in the general-purpose mediators can be at risk too.
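A very rough sketch of the two-card flow described above. The `wrap` function and channel key are toy stand-ins (an XOR keystream purely for illustration); real smartcards would establish proper secure channels, and every name here is hypothetical:

```python
import os, hashlib

# Sketch of the two-card scheme: one card acts as an RNG that generates
# an exportable key; the host (mediator) wraps the key under a
# pre-shared channel key and loads it into the second card, which acts
# as the keystore. The XOR "wrap" is a toy stand-in for a real secure
# channel protocol.
def rng_card_generate() -> bytes:
    return os.urandom(32)                # card-side key generation

def wrap(channel_key: bytes, data: bytes) -> bytes:
    stream = hashlib.sha256(channel_key).digest()
    return bytes(a ^ b for a, b in zip(data, stream))

def keystore_card_load(channel_key: bytes, wrapped: bytes) -> bytes:
    return wrap(channel_key, wrapped)    # XOR wrap is its own inverse

channel_key = os.urandom(32)             # pre-shared with both cards
key = rng_card_generate()
stored = keystore_card_load(channel_key, wrap(channel_key, key))
```

The point of the structure is that the key only ever crosses the mediator in wrapped form, so a subverted mediator sees nothing useful.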
Nick P • August 29, 2015 12:02 PM
“What is the difference between trusted and trustworthy ?”
For me, there’s a difference. I use the term trusted to mean exactly what you say: “a component privileged enough to violate the security policy.” I use trustworthy to mean that there’s sufficient evidence that the component will do its job in a given situation. So, the trusted components should be trustworthy. Might or might not be how other people use the terms.
Of course, once you’re using either, they are equivalent in that you are trusting it to get the job done.
@ QuartzDragon, etc on FDE
It actually depends on the threat model first and then how FDE is implemented. Here’s a few threat models:
So, you really want to keep the TCB out of enemies’ hands while the storage medium itself is throw-away. This is why cheap computers with TrueCrypt are preferred by many with a concern for these threat models: it eliminates most of the issues. As far as strong HD security goes, the NSA’s Inline Media Encryptor approach is still the best, as it’s OS-neutral, hardware-implementation-neutral, and can incorporate the best security you can throw at it. My approach was to clone that with TrueCrypt-like encryption and/or TrueCrypt at the software level. Clive went further by suggesting combining software, some kind of IME, and self-encrypting drives. I further that with my generic recommendation that each component come from a different, competing country.
In any case, the current level of physical attack is too high to trust threat model 2. I mean, there are certainly all kinds of ways to slow enemies down to the point that they might not do much during a police stop, customs check, or bathroom trip where you left the laptop. However, we must assume they’ll eventually figure out the workings of the hardware and probably accelerate an attack. Unlike software, you can’t change hardware enough to prevent this, given NRE costs. So, just use a strong method, counter known risks (firmware to software), do tamper-evidence, and don’t let the enemy get hold of it outside your sight. And rest easy knowing 99.999% of attackers are using software or stealthy methods that this can stop. 🙂
Note: A potential solution to the above is a verified FPGA built into the SOC to allow diversification on a per-customer basis. The Archipelago open-source FPGA might be used. Experienced hardware people can also build all kinds of digital or analog tamper circuits with their own power sources. Very high risk of false positive destroying your stuff. Meanwhile, my recommendation stands with most resistance being choice of hardware/software or expensive hardware.
Clive Robinson • August 29, 2015 1:01 PM
@ Nick P,
Of course in the UK trust/trustworthy is out the window at all levels. The current Home Sec –known affectionately as “psycobitch” by her closest advisors– has decided to make even code talking illegal,
Thoth • August 29, 2015 7:12 PM
Re: Accent is encryption
Not sure if you have ever talked to a Singaporean or other South East Asian (during your visits) who mixes Asian languages with poor broken English (like me :)). I guess that kind of brings “Accent Encryption” to another level, dumping multiple language types into English and sending English-only speakers into a tailspin for a while.
“So, the trusted components should be trustworthy. Might or might not be how other people use the terms.”
It’s almost the same thing I guess. Trust is a difficult thing.
“And rest easy knowing 99.999% of attackers are using software or stealthy methods that this can stop”
“A potential solution to the above is a verified FPGA built into the SOC to allow diversification on a per-customer basis”
Just using a single type of FPGA (whether open or not) is too risky. The prison model is needed, thus my suggestion of using a couple more components here and there just to segregate things out. Using the IME method to deflect software or host-system attacks until the attacker is forced to decap the IME chips makes for a hugely painful effort on their part.
Decapping the IC chips wouldn’t be too difficult, but decapping them (not just 1 chip but a few more) to figure out the internals would mean a highly specific, per-target attack, since you can’t run a dragnet process against all the IMEs. Furthermore, if the chips are properly designed to be tamper-resistant to some degree, the decapping becomes a hit-or-miss process (unless they have tried their hands on multiple experimental chips before getting to their target’s chip), and you don’t always succeed in decapping every IC chip, as it’s a somewhat manual thing to do and mistakes are made very often.
That brings me back to where I forgot to mention that my design should always be battery-backed and have its own internal clock, without which the design is incomplete 🙂. ORAM techniques for handling oblivious computations over multiple chips, so that no single chip spills the entire guts out, would also be rather useful. That forces the enemies to decap more diversified chips, not just because of the different manufacturers (to prevent collusion amongst the chips) but also because of the diversified schematics and layouts of the different chips’ security designs. That means the adversary must have perfect knowledge of all the security traps of all the chips in the IME so as not to trip the hardware and software tripwires in the IC chips.
An IME might be valid if it goes the additional mile of including a display screen for message checking and secure entry. The secret keys can be split among the chips with an M/N method, and one of the N shares can be a user PIN or multiple user PINs (like how the Thales HSM operates, to some degree).
If the missing key is formed from an M/N quorum, it will be very frustrating to assemble enough of a quorum, hunting down the right people to waterboard or use enhanced interrogation techniques on to get them talking.
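As a sketch of the M/N quorum idea, here is minimal Shamir secret sharing over a prime field (illustrative only; a production scheme would use a vetted library and share the key material as bytes, not a small integer):

```python
import random

# Minimal Shamir M-of-N secret sharing: the secret is the constant term
# of a random degree-(M-1) polynomial over a prime field. Any M shares
# reconstruct it via Lagrange interpolation at x = 0; fewer than M
# shares reveal nothing about it.
P = 2**127 - 1                              # a Mersenne prime field

def split(secret: int, m: int, n: int):
    coeffs = [secret] + [random.randrange(P) for _ in range(m - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret
```

One share per chip plus one derived from the user PIN gives exactly the property described: no single component, coerced or decapped, yields the key.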
anon-ish • August 29, 2015 9:04 PM
Pretty constant group of commenters here over the years. You folks are something else! Mostly not shills, mostly just folks saying pretty wacky stuff and meaning no harm. Thanks for all of it! You folks are all right! Work on the proper implementation of the one time pad, though, and stop whining about key distribution, and stop promoting stuff that depends on future people not making advances in mathematics, ok?
cinical • August 30, 2015 12:25 AM
Never met a shill in my life. A shill is an antagonist to another shill. As all things are relative, shilling is a perpetually circular argument.
The nature of shilling cancels itself out. Thus shilling is no shilling.
Bystander • August 30, 2015 3:08 PM
@Who? My grandson can defeat all of these in about 15 minutes. FWIW, he has his own wave solder system so he can remove and replace any SMT chips he needs.
Any SMT IC? Ever tried to wave solder a BGA? Not that anyone would try that – seriously.
He would need a few more tools as BGAs are much more popular than some might expect…
Total control over the endpoint means that the hardware of the device defining the endpoint is completely under your control.
Fascist Nation • August 30, 2015 4:19 PM
It is not really a BIOS password--which can be disabled by removing a battery to reset to the factory default of none--but an EEPROM [nonvolatile chip] storage password (aka a Supervisor or administrative password). These are very hard to tamper with or hack, often even by the computer maker.
They can be linked to also trigger the hard drive password capability built into every ATA drive since the days when drives maxed out at around 20GB.
This password requirement can be triggered when a hard drive is pulled, even with the power off, or by some other key change in the computer, issuing a challenge to the user to enter the password before proper computer function resumes. The downside of the hard drive password--if different from the supervisor password--is remembering it, since it does not challenge the user unless the drive has been removed or some other change in its environment has occurred.
There are some fairly ineffective ways to try to attack it, and since these are old, I suspect there are others now since I last looked:
— Found on The UBCD. http://www.ultimatebootcd.com/
Bystander • August 30, 2015 4:47 PM
Correction of my above post:
Total control over the endpoint means that the hardware and software/firmware of the device defining the endpoint is completely under your control.
I like the approach made by Thoth as it is a way to make attacks individual and costly.
Thoth • August 30, 2015 7:19 PM
Just like any M/N quorum-based defense, you spread the risk over a bigger surface area rather than a single chip storing the secret (password or key).
It is one thing to detect that your hard drive is not able to boot; it’s another thing to circumvent the “Pls check missing HDD” or “Incorrect HDD” message and start reading the bytes, although who knows what kind of encoding the firmware/chip might use. That is obscurity instead of proper security, unless it is proven to be crypto properly baked in (very rarely done properly).
Better to split the risk up.
Thoth • August 30, 2015 7:46 PM
This is what I meant by integrating a keypad and screen into an IME, as shown in the linked picture. But this one is a kind of half-done job (with a keypad plug receptacle only).
Thoth • August 30, 2015 7:48 PM
This one has the PIN entry embedded but lacks a screen. How big would a screen be…
I love you guys, I always learn something new.
@Clive, thanks for the gobbledygook link. If that’s true the GCHQ is losing touch. I saw something on PBS earlier this year talking to the Irish prisoners of the 70s about their involvement in the resurrection of their native tongue during the early years of the IRA.
IF that link is serious, it also means the brits have lost any sense of humor.
I wonder what languages give them trouble; certainly that specifically covers thieves’ cants/cryptolects, but what pigeon dialects ruffle their feathers?
This is stuff the British have been dealing with for ~200 years; maybe they’re mad the five lies only licensed English, French and Russian from Nuance, and with all the new toys they’re suffering from data overload.
Bystander • August 31, 2015 2:00 PM
Sure – I saw the security by obscurity angle.
Designing equipment in a tamper-proof way and reducing the possibilities of attack through side channels requires attention to detail. A lot of attention is required for the enclosure.
The power source included should carry enough energy to serve for (partial) self-destruction of the device in case of tampering.
Andrew Cooper • September 1, 2015 2:21 AM
I back up all my data online against loss for any reason. It happened to me 3 years ago that all my data got destroyed. Very thankful for this kind of post, The Benefits of Endpoint Encryption, that can save a lot of files.
Wael • September 1, 2015 11:04 PM
their security designs which I called it the “Castle-Prison-Dataflow“
Then you have seen only a small slice of the picture. This design is more about “control” organization — it’s not data centric, strange as it may sound.
You talk about “called” in the past tense! When did you ever call it that?
Mike • September 2, 2015 9:54 AM
@”r” — yes, the newsthump is like theonion.com. But with added Brit irony.
Fantastic laugh though, @Clive. Like all good humour, it’s so close to the truth that it’s almost not funny any more. But less of the “psycobitch”, please: Let’s be unswervingly polite to her all the way to the international crimes court.