Security Vulnerability in Windows 8 Unified Extensible Firmware Interface (UEFI)
This is the first one discovered, I think.
Ryan Ries • September 24, 2012 1:31 PM
So the article seems to imply that turning on SecureBoot can prevent this sort of attack.
But when we thought we wouldn’t be able to turn off SecureBoot at first, all the Linux people cried foul.
Just can’t win…
Nicolai • September 24, 2012 2:02 PM
As I understand the attack, it basically turns UEFI Secure Boot off. (So it’s not really an attack against UEFI itself, more like a side-channel attack.)
You could defend against it by forcing Secure Boot on, but then Win8 wouldn’t be able to run on a lot of old computers (and a certified Win8 computer wouldn’t be able to run Linux).
However, because MS demands that a computer support UEFI to become “Win8 Certified”, we might see a Win9 that requires Secure Boot (which would make this attack useless).
Mark Tinberg • September 24, 2012 2:15 PM
I think there is a misunderstanding here; this looks like exactly the kind of bootkit that Secure Boot was designed to prevent, and it does prevent it. The bootkit attacks the kernel by disabling module signature checking as the firmware loads the kernel image off disk. Presumably later in the boot process a kernel module loads and hides the bootkit and all evidence of its activity.
This attack was previously demonstrated against Mac OS X as well. Presumably it would also work against any OS.
Win8 certified hardware is required to have Secure Boot enabled by default although it is also required to allow Secure Boot to be disabled and to enroll new keys. This is needed to be able to boot older OSs like Win7 on Win8 certified hardware.
Linux shouldn’t be affected at this time, because you can disable Secure Boot or enroll keys on Win8 hardware, and the major vendors are getting bootloaders signed by Microsoft so that they work out of the box without any fiddling.
Matthias Urlichs • September 24, 2012 2:32 PM
What Mark said.
It’d be a major feat if that attack worked on a computer that had Secure Boot turned on (and by “work” I mean “the user doesn’t realize anything is wrong, even after rebooting”). But unless the thing also manages to turn SB off (or they got their rogue bootloader signed by Microsoft), this is a load of hot air.
Nick P • September 24, 2012 2:33 PM
So, we didn’t trust UEFI for security or DRM-related issues. Now, it’s proven to be untrustworthy. The closest commercial offerings to the kind of trusted boot I advocate are Chromebook Verified Boot & uLoad.
Chromebook verified boot
I find it easier to get verification correct if most effort is put into a ROM-based, TCB loader & verifier. That way, if the rest turns out vulnerable, you can replace it with plenty of assurance without a recall. This would need careful consideration and wargaming of the various pieces of hardware on the system. Fail-safes like uLoad’s should exist to keep it from failing into an insecure state.
Additionally, I think there should be an easy (and CHEAP) way of replacing the public key used for verification. There are a few ways to do this. One is to put it in the writable flash, authenticated via TPM or firmware/microcode. Another is to get one’s own verifying bootloader signed by the manufacturer, then it loads & can load arbitrary user-signed programs. Any system where the manufacturer has sole control over what runs is a slippery slope.
phred14 • September 24, 2012 2:46 PM
My impression is that this IS something new. I have been under the impression that if Secure Boot is enabled and you boot Windows 8 or another properly implemented Secure Boot OS, your system will be secure. If you do not have Secure Boot enabled, at the very least Windows 8 will not boot.
Therefore one could argue that Windows 8 will ONLY boot securely. This article seems to contradict that.
Matthew Garrett • September 24, 2012 2:49 PM
It’s really not a vulnerability as such – a standard UEFI setup will allow you to run untrusted code before the OS starts, and without a TPM you’ve got no way of verifying the OS state after the fact. It’s equivalent to the MBR-based bootkits that already exist in the BIOS world, but the various UEFI entry points mean that the hooks can be a little more elegant. Secure Boot (assuming a bug-free implementation) would secure against this attack.
Mark Tinberg • September 24, 2012 2:56 PM
You might try reading some of the Secure Boot documentation written by Matthew Garrett, as it is the most succinct that I’ve seen.
And of course Win8 will boot on non-UEFI and non-SecureBoot systems otherwise there’d be no way to install it on existing systems.
As far as putting effort into a firmware-based loader and verifier, that seems to be what UEFI and Secure Boot are trying to achieve… You can’t modify keys after the system boots, but you can blacklist keys and install updates as long as they are signed by a key that the firmware already trusts.
Nick P • September 24, 2012 2:57 PM
EDIT: One more for those looking for better trusted boot designs.
Freescale Secure Boot
I forgot to include this one. Like Chromebook, they combine ROM, writable storage & signatures. The part that makes them shine is how many extra precautions they take during the trusted boot process to prevent circumvention. Seems to be among the most trustworthy of the technologies so far. (Barring dedicated, security-related chips like Infineon’s.)
curtmack • September 24, 2012 3:34 PM
The Nintendo Wii uses a similar setup. To recall from memory:
The first firmware level, boot0, is built in hardware directly into the processor.
The next level, boot1, is stored in a special sector of the NAND flash, and the SHA-1 hash is burned onto unrewritable fuses in the processor itself; boot0 refuses to load boot1 if the hashes don’t match.
boot1 loads boot2, which Nintendo verifies by including an RSA signature of the SHA-1 hash. Then, finally, boot2 loads the system menu (which is also verified by a similar signature).
So to recap:
boot0 = literally unalterable for all Wiis ever
boot1 = computationally infeasible to alter once originally written (you’d need a SHA-1 collision)
boot2 = computationally infeasible to alter without Nintendo’s private key
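That chain can be sketched roughly in Python (every name and value below is invented for illustration; the real boot0 and boot1 are native code in mask ROM and NAND, and the RSA step is a stand-in function here):

```python
import hashlib

# Hypothetical stand-ins for the Wii's boot stages; names and values are invented.
BOOT1_IMAGE = b"boot1 code"
FUSED_BOOT1_HASH = hashlib.sha1(BOOT1_IMAGE).digest()  # burned into CPU fuses at the factory

def boot0_load_boot1(nand_boot1: bytes) -> bytes:
    """boot0: refuse to run boot1 unless its SHA-1 matches the fused hash."""
    if hashlib.sha1(nand_boot1).digest() != FUSED_BOOT1_HASH:
        raise RuntimeError("boot1 hash mismatch - refusing to boot")
    return nand_boot1

def boot1_load_boot2(boot2_image: bytes, signed_hash: bytes, verify_rsa) -> bytes:
    """boot1: check the RSA signature over boot2's SHA-1 hash.
    verify_rsa is a placeholder for a real RSA verify against Nintendo's public key."""
    if not verify_rsa(hashlib.sha1(boot2_image).digest(), signed_hash):
        raise RuntimeError("boot2 signature invalid - refusing to boot")
    return boot2_image
```

Each stage only runs the next stage after checking it against something the attacker cannot rewrite (fuses, or a public key baked into an earlier stage).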
This structure actually came back to bite Nintendo in the ass because the first several production runs are forever marred with a major, security-destroying bug in boot1…
moo • September 24, 2012 10:10 PM
Yeah, Sony botched the crypto in the PS3’s secure boot system too. They reused the same “random” number in signatures that were each supposed to have their own. Result: the fail0verflow guys did some algebra and recovered Sony’s private signing key…
NobodySpecial • September 24, 2012 11:07 PM
” Secure Boot is enabled and you boot Windows 8 or other properly done Secure Boot OS, your system will be secure”
The trouble is that you buy a machine built by %cheapest supplier% with an OEM copy of Windows. Other than the hologram printed on the box how do you know that it’s secure?
Unless I order the secure BIOS chip direct from a trusted maker and solder it on myself, then install a retail DVD of Windows direct from Microsoft – then all these “security” features are like ordering drugs on the internet and relying on them coming in a sealed package!
John • September 25, 2012 3:32 AM
To summarise: secure systems are possible, but difficult.
phred14 • September 25, 2012 7:08 AM
On the non-Windows 8 side of things, I’m a Gentoo user. I build my own kernels, usually monthly or more often. Let’s say that I buy into UEFI, secure boot, and all of that.
How do I at least attempt to build my own secure kernels? At least Gentoo verifies checksums on what it downloads, so I’ve got some confidence that I have good kernel source. I’ll also need tools and keys to sign my kernel, etc. I can also believe that I will need to have downloaded at least once a kernel I can “trust”, and use that to start the trust chain of my own kernels, building the new kernel while running a “trusted” kernel, etc.
Can I do this online, or is it something where I have to unplug my network while building and signing the new kernel? Do I need the further step of keeping my signing infrastructure on a USB key that is never plugged in while I’m networked? If that’s the case, how do I set it up in the first place?
This seems to me to be a down-the-rabbit-hole line of thought.
curtmack • September 25, 2012 11:37 AM
@moo fail0verflow even includes many of the people who found the original Wii security bug (which, for the record, involved comparing SHA-1 hashes with strncmp).
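For the record, here’s why the strncmp comparison was fatal: strncmp treats its inputs as NUL-terminated C strings, so it stops at the first zero byte. A SHA-1 hash is raw binary and may well begin with 0x00, at which point strncmp declares any two such hashes “equal”. A simulation (not the actual boot1 code):

```python
def strncmp_like(a: bytes, b: bytes, n: int) -> int:
    """Simulate C strncmp: compare at most n bytes, but stop at the first NUL,
    because strncmp treats its inputs as NUL-terminated strings."""
    for i in range(n):
        ca = a[i] if i < len(a) else 0
        cb = b[i] if i < len(b) else 0
        if ca != cb or ca == 0:
            return ca - cb
    return 0

# Two completely different 20-byte "hashes" that both start with a NUL byte:
expected = bytes([0x00]) + b"A" * 19
computed = bytes([0x00]) + b"B" * 19
assert strncmp_like(expected, computed, 20) == 0  # strncmp says "equal"!
assert expected != computed                       # but the hashes differ
```

An attacker only has to brute-force content whose hash starts with a zero byte matching the target, rather than forge a full SHA-1 collision; the fix is memcmp, which compares all n bytes regardless of their values.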
Jeff • September 25, 2012 11:58 AM
@phred14: The UEFI approach to security is “don’t bother trying to prevent malware from rooting the OS – the user’s files are unimportant; just prevent any unauthorized modifications of the OS from persisting across reboots”. Under this model there can be no place for a user compiling his own kernel at all because the OS will usually be under the control of malware which could infect it.
RH • September 25, 2012 5:00 PM
@phred14: It seems to me that the instant you build a kernel on a machine, you trust that that machine is uncompromised (to the point of trusting the entire gcc toolchain). You would need a trusted image (possibly a standalone image used for nothing but compiling). Probably the best way to pull this off is a shared volume between your live install and a sanitized image with gcc.
At some point the DRM arguments need to be separated from secure booting, at which point security becomes a sliding scale rather than a binary secure/insecure. For me, I’m happy just to be told when an untrusted image is about to be loaded for the first time. If this happens at a time when I didn’t expect it, then the computer is infected. If it gets infected by a system that’s smart enough to wait for a Windows Update to do its infecting… well, so be it. That’s my personal security/usability tradeoff point. I’m quite positive Oak Ridge National Labs will prefer a much more stringent requirement (such as 2-factor authentication of checksums distributed on paper through secure channels).
failure • September 25, 2012 5:29 PM
Microsoft puts out an insecure OS where nothing is sandboxed and the browser takes over your entire machine. They have something like 3,000 engineers per project and not one of them can write secure code, so the MBAs got together and decided, hey, let’s just pawn this off on the hardware manufacturers.
This isn’t going to work either, and eventually they will just run giant server farms and sell you dumb terminals to connect to their DRM-riddled ‘cloud’, where everybody runs a remote desktop that checks for piracy or thoughtcrime every hour. Pay-per-use computing, with a little paperclip that pops up telling you that possible ebook piracy has been detected and lawyers have been dispatched to your house.
curtmack • September 26, 2012 3:18 PM
I think I may have figured this out:
The build computer is a desktop computer located in a locked underground facility at an unmarked location. It is not connected to any network. It has no wireless devices, and even if it did, the room is a Faraday cage.
The build computer contains no storage except a small hard drive, which contains two partitions. It has no MBR and the system cannot boot by itself.
The first partition contains an extremely minimalistic installation of Linux. It is put together by hand and contains only the kernel (along with a small handful of needed modules), the usual required system commands, bash, PGP, sha1, and gcc.
It also contains a PGP signature file that signs the SHA-1 hash of every file on the drive save itself; init scripts verify that all files match and that no unexpected files are found. The second partition contains the same thing, except that every file is signed with a key whose private half was destroyed. This allows recovery in case the first partition breaks or is tampered with, while still allowing the first partition to be updated if necessary.
As mentioned, the system cannot be booted by itself. It requires the boot key, a USB drive which contains the necessary boot code. The USB drive is a custom piece of hardware that contains fuses encoding the SHA-1 hash of the drive contents, and the USB drive will not start if the hash does not match. It uses a customized version of LILO that allows it to perform the restore process from partition 2, after verifying the signatures. The boot key is stored in a secure safe in the owner’s house. The boot key also contains the PGP private key for signing files on the first partition.
Finally, a second USB port is included. This is used for a USB drive that houses the files to be built. The drive is mounted, and the PGP signatures of the sources are checked. Once verified, the sources are built and the binaries are written back to the USB drive. Upgrading the Linux installation is similar, except that it writes to itself and updates the signature file, and then reboots.
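The init-script verification described above might look roughly like this (a sketch only; the manifest format and file layout are invented, and checking the PGP signature over the manifest itself is omitted):

```python
import hashlib
import os

def load_manifest(path):
    """Manifest format (invented): one 'sha1hex  relative/path' entry per line."""
    entries = {}
    with open(path) as f:
        for line in f:
            digest, _, name = line.strip().partition("  ")
            entries[name] = digest
    return entries

def verify_tree(root, manifest):
    """Return (tampered, unexpected): files whose hash mismatches the manifest,
    and files present on disk but absent from the manifest."""
    tampered, seen = [], set()
    for dirpath, _, files in os.walk(root):
        for name in files:
            full = os.path.join(dirpath, name)
            rel = os.path.relpath(full, root)
            seen.add(rel)
            with open(full, "rb") as f:
                digest = hashlib.sha1(f.read()).hexdigest()
            if manifest.get(rel) != digest:
                tampered.append(rel)
    unexpected = seen - set(manifest)
    return tampered, unexpected
```

The unexpected-file check matters as much as the hash check: a bootkit that only adds files would otherwise pass unnoticed.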
Also, there are tigers.
curtmack • September 26, 2012 4:12 PM
Obviously this will be set up only after performing a mathematical proof of correctness of every line of source code that will run on the build computer.
Clive Robinson • September 26, 2012 8:41 PM
Also, there are tigers
There are no tigers if you place four special rocks (which I can sell you 😉) around the entrance to your underground facility.
More seriously though, how are you keeping other people from plugging in their own USB drives that act as boot keys?
The simple fact is that, with the best will in the world, you cannot make a system that is “known to be secure”, only one that you think “might” be secure.
Simply because historically we have the issue of “unknown unknowns”, and we cannot see into the future…
Nick P • September 26, 2012 9:12 PM
@ RH and curtmack
You guys are trying. It’s better to focus on practical security rather than total. The designs you’ve mentioned that try to be extremely secure fall short in extreme ways. It’s an easy trap to run into if you don’t do this stuff much. The more practical ideas mentioned are decent, including separate build machine & extra security on it. If you’re using something like Linux or UEFI, all the formal methods and stuff don’t apply anyway because you’re guaranteeing something guaranteed to have residual, severe flaws. 😉
I can’t find my big layer by layer breakdown right now. However, even though I promote high assurance, I have discouraged efforts I thought would go nowhere. The comment below shows you what you’re up against trying to make a useful, secure system with maybe a little custom hardware.
So, what to do for a practical build system or signature system? Well, at least two systems is a start. Might use something like minimized OpenBSD for the critical system. You need to get copies of important things like the kernel/OS, GCC toolchain, etc. Needs to be able to zip up a release & sign it with protected private key. Also, need a process where developers’ changes are checked, integrated with the local repository, and the final software rebuilt.
No matter how you do it, the physical machine must be safe from anyone untrustworthy. The critical stuff should be encrypted & hashed just in case. It should NOT be connected to Internet. Transfer files to and from using a simple non-DMA link (home-made data diode, anyone?). Keep copies of everything downloaded from internet, with hash and signatures, on read-only memory (CDROMS, preferably, using diff PC). Optionally put the signing key in an encrypted volume with very strong password, only opened [into RAMdisk] during the signing phase itself.
Also, sometimes you can get free security benefits at high performance. An example is how one of you was going to check a bunch of files for modifications. The dates might lie, so your software would probably hash every file & compare hashes to trusted baseline copy. Alternative: make a trustworthy system image, save it to hard disk as a whole image, & hash that image. Next time, you just have to hash/check one big file, then load it. Simple, eh?
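A sketch of that whole-image approach (file paths are placeholders; chunked reading keeps memory use flat on multi-gigabyte images):

```python
import hashlib

def hash_file(path, algo="sha256"):
    """Hash one large file in chunks so memory use stays flat."""
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def image_is_trusted(image_path, trusted_digest):
    """One comparison over one big file, instead of walking thousands of
    files and comparing each one's hash to a baseline."""
    return hash_file(image_path) == trusted_digest
```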
You might also wonder about subversion, OS issues, etc. Get several systems from different vendors under different names. Maybe even different processor architectures. Put different OS’s on them. Compile the release on all of them with the same settings and software. Trusted system checks that they match & signs the result. Main security requirement is trusting the generation software (e.g. compiler) & checker, which is less than trusting whole stack. Opponents have to compromise all kinds of stuff to slip one by.
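The cross-vendor check-and-sign step might be sketched as follows (sign() is a placeholder for releasing with the protected private key, and this assumes the toolchain produces reproducible, bit-identical builds):

```python
import hashlib

def builds_agree(build_outputs):
    """build_outputs: list of bytes, one compiled artifact per build machine.
    Sign only if every machine produced bit-identical output."""
    digests = {hashlib.sha256(b).hexdigest() for b in build_outputs}
    return len(digests) == 1

def check_and_sign(build_outputs, sign):
    """An attacker must compromise every diverse machine the same way
    to slip a modified artifact past this check."""
    if not builds_agree(build_outputs):
        raise RuntimeError("build outputs diverge - possible compromise")
    return sign(build_outputs[0])
```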
Hope some of this helps.
Jason T. Miller • September 27, 2012 3:54 AM
Why trust any “home-made” removable storage device more than a $10 USB stick purchased at retail the morning of the build?
curtmack • September 27, 2012 10:31 AM
@RH: I was more interested in preventing tampering. Note that both the computer and the real, unaltered boot key have to be present at the same time to modify anything on the computer without being detected, unless you break SHA-1 or PGP. At the very least the attacker would need to steal the original boot key to get the PGP private key, and at that point he may as well just use it to install Ubuntu: Malware Ahoy Edition directly.
@Nick: It was more intended to be a demonstration of why going heavily on the security side is ridiculous. Although it’d be a fun project to actually make that version of Linux.
Nick P • September 27, 2012 2:22 PM
Appreciate the clarification.
A data diode is a transfer device that enforces one-way transmission of data. Data diodes are simple enough that two or three have been certified to EAL7, the highest Common Criteria assurance level. One manufacturer has a “oneway ethernet” design that mainly involves modifying the cables and driver. Very easy home project, but it doesn’t have to be ethernet.
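A software-only analogue of the send side might look like this (the real one-way guarantee has to come from hardware, e.g. a fibre or cable with the return path physically removed; the loopback address and port here are arbitrary placeholders):

```python
import socket

def diode_send(payload: bytes, host="127.0.0.1", port=9999):
    """Send-only transfer over UDP. UDP fits the data-diode model because it
    needs no acknowledgements from the receiver; TCP would stall without a
    return path. Returns the number of bytes handed to the network stack."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        return s.sendto(payload, (host, port))
    finally:
        s.close()
```

In a real deployment the receiver would use forward error correction or repeated sends, since the sender can never learn whether a datagram arrived.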
Sidebar photo of Bruce Schneier by Joe MacInnis.