not only has this horse bolted, but it's glue ...
I did have to quickly check the calendar there....
It's too late for April Fools', and certainly too late for 1980.
Some of my older computers had a really good BIOS security mechanism - it was called a 'jumper'. Are BIOS updates so frequent that we really NEED them to be always available? I know we try to make computers so that the average Joe can do stuff, but is this something that the average Joe needs to do?
It's surprising how much old tech is still in production. There are a couple of reasons for this.
Silicon doesn't wear out and new code is expensive.
USG agencies (who this is really written for) work on a budget horizon of 3-5 years for replacements for client workstations (laptops/desktops). Given recent voter angst that'll probably telescope out a bit.
No matter how loudly people yammer about waste, the USG doesn't trash multi-million dollar investments casually. For example, there are old mainframes still in operation.
Even hypervisors still run on hardware.
So this guideline is going to be in play for quite a while.
Occasionally, yes. Or someone more sophisticated wants to update hundreds (or thousands) of BIOSes without taking a long time.
Actually, silicon does wear out. What is it called? Electron migration? Or some such thing? CPUs do fail over time. It takes something like 8-13 years, but it does happen. Higher operating temperatures and electric shocks will speed up this process.
Now power supplies and hard drives, those fail far more frequently!
BIOS can be attacked, and it's not the only flashable firmware in a computer. DVD drives normally have an EEPROM as well, and some CPUs have flash memory to store microcode patches.
Anything that can be updated can be exploited, though not having updates can allow other exploits.
I'm not impressed with the title (of the article, not the PDF):
"Build Safety Into the Very Beginning of the Computer System"
Mucking about with what the BIOS does or does not do is not the "very beginning".
The "Very Beginning" starts with the hardware logic gates and progresses through the various state machines up to the CPU "Micro Code", which is also alterable by knowledgeable users on some well-known CPUs.
That aside, there are a whole load of other security issues: no matter what you do to the various bits of firmware, you will still haemorrhage information through hardware side channels.
Then there is the issue of "efficiency". As a rough rule of thumb, any system that is efficient in usage will, unless designed by an appropriate expert (and no, they don't work for the commodity OS companies), likewise haemorrhage information via the scheduler, pager, loader and other "efficient" subsystems.
And all that is before we start talking about "susceptibility" to various directed energy attacks on the hardware and all those bits attached to it.
All that being said, it is certainly a step in the right direction, and all things considered, for the dollar expenditure this is likely to provide a high return for many, with in reality little downside.
Oh, hands up those who have 486DX or earlier CPUs still running 8)
@Carl 'SAI' Mitchell
"DVD drives normally have an EEPROM as well, and some CPUs have flash memory to store microcode patches.
Anything that can be updated can be exploited, though not having updates can allow other exploits."
I wonder if these can be updated through code on a USB stick?
OT: No 486s, but I do have a DEC Alpha that runs, and I had my own PDP-9; it was still running a couple of years ago.
No 486s, again, but I've got an Apple IIc and IIgs, both still ticking.
It's been a couple of years since I fired either of them up, but in my basement and believed to be in working order:
IBM Model 5160 - PC XT with an 8088.
Apple II Plus with a 6502.
Isn't one of the practical upshots of this going to be further precluding legitimate users from modifying their own equipment? For example, if someone wants to install coreboot (or, to extrapolate, flash a DVD drive to be region-free), that's probably "unauthorized modification," surely? If I'm reading this right, implementing this will cause considerable collateral damage, and it might lead to yet another arms race against people who just want to own their own stuff.
@ Clive Robinson
Yeah, the problems in all my desktop designs start with x86 architecture itself. We have SMM mode, real mode, protected mode, an unusual ring security implementation, segments (often unused in many OS's), DMA, ultra-high-capacity cache-based side channels, bloated firmware, and errata in security-critical processor components. *IF* we can make a mostly secure x86 kernel, we'd still have to rebuild it for each choice of chip, firmware and hardware to counter the security issues they impose.
Earlier I was re-reading Paul Karger's Retrospective on the VAX Security Kernel (free online via Google). He goes into detail describing the design elements and the processes that were required for the system to reach what was considered high assurance at the time. One of the first things I noticed was that they incorporated microcode changes into the design regularly, thanks to how flexible the VAX processor's microcode was. They used these microcode changes to make an unvirtualizable processor into a decent TCB for their hypervisor.
A retrospective on the VAX Security Kernel
I think Gemini Computers did the same thing with x86 when they designed their A1-class GEMSOS OS. I don't know if I told you Clive but I finally got around to contacting Aesec, current owners of GEMSOS. They quickly responded with information about current and expected future capabilities. From what I gather, the original x86 GEMSOS platform must have had custom firmware because they are currently trying to work out a deal with Intel to do this on one of their new platforms. This is also necessary because their OS's security is tightly integrated with x86 features like segments.
You know, I don't see us getting a secure x86 or whatever anytime soon. I think someone wanting that needs to roll their own hardware or build a custom CPU like the Magic-1 machine. There are certain cost and performance considerations, of course, but Magic-1 hosted a web site and ran tons of applications.
Magic-1 homebrew CPU
I honestly keep thinking I should just try to get the software from one of those old B3 or A1 OS builds, get/duplicate the old hardware, and bite the bullet of losing some features. For instance, I might just get a copy of the most recent VAX Security Kernel, buy a VAX 8800 processor system, load it with memory, and port some applications to it. A 22MHz processor didn't sound like much until I read Karger's description of what they were doing with it:
"The VAX Security Kernel can support large numbers of simultaneous users. Once... operational, all software development... was carried out on several VM's running on the VMM on a VAX 8800 system. On a typical day, about 40 software engineers and managers were logged in running a mixed load of text editing, compilation, system building, and document formatting. The system provided adequate interactive response time and is sufficiently reliable to support an engineering group that must meet strict milestones and schedules."
And today's IT groups somehow have trouble running 40 application VMs on a 4-way with 32 2+GHz cores and 32GB of RAM. Maybe we should just build a faster VAX and port the old kernel to the new hardware. Maybe use some of the more security-centric FPGAs and OpenCore designs to speed it up. I'd bet money that it would cost us less and happen faster than Intel making their hardware secure. Hopefully, Intel will at least use their newly acquired Wind River technology, like the separation kernel or RTOS, to improve the security of their platform. There's been some speculation that they might do this, but I don't think we should hold our breath.
@X: The phenomenon is called "electromigration" and it refers to the movement of conductor atoms (metals) in solid traces, which will over time convert a nano-scale imperfection into an open circuit. Usually it is a failure mechanism only in poorly designed high current pathways.
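For what it's worth, the temperature dependence people mention can be sketched with Black's equation, MTTF = A * J^(-n) * exp(Ea / (k*T)). A minimal sketch, with purely illustrative parameter values (the activation energy and temperatures below are assumptions, not figures for any particular chip):

```python
import math

# Black's equation for electromigration mean time to failure:
#   MTTF = A * J**(-n) * exp(Ea / (k * T))
# A - geometry-dependent constant, J - current density,
# n - scaling exponent, Ea - activation energy (eV),
# k - Boltzmann constant (eV/K), T - absolute temperature (K).
K_BOLTZMANN_EV = 8.617e-5  # eV/K

def mttf_ratio(t_cool_k, t_hot_k, ea_ev=0.7):
    """Relative MTTF of the cooler part vs. the hotter part,
    holding A, J, and n constant (illustrative values only)."""
    return math.exp(ea_ev / K_BOLTZMANN_EV * (1.0 / t_cool_k - 1.0 / t_hot_k))

# Same chip at 60 C vs. 85 C: the cooler one lasts several times longer.
ratio = mttf_ratio(333.15, 358.15)
```

Only the ratio matters here; the absolute constants cancel out, which is why running cooler buys so much lifetime even without knowing A or J.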
worth a look:
One interesting thing to see is the tension between IT, government, users, and developers/power users/hobbyists. Each group wants something different from their computer, and if they don't have it, the computer is "broken".
"Isn't one of the practical upshots of this going to be further precluding legitimate users from modifying their own equipment?"
If my understanding is correct, then that's not what the document says. Secure local update remains a fully legitimate method.
That said, I hardly ever update my BIOS unless I hit a known bug or am alerted to a potential or newly discovered vulnerability/exploit. That's what RSS feeds and Twitter are useful for.
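And when I do update, I at least check the image against the vendor's published digest before flashing. A minimal sketch (the file name and digest source are assumptions, and a hash check is no substitute for verifying the vendor's signature):

```python
import hashlib

def sha256_of(path):
    """Hash the file in chunks so large firmware images stay cheap to check."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_image(path, expected_hex):
    """Return True only when the image matches the published digest."""
    return sha256_of(path) == expected_hex.strip().lower()
```

Then refuse to run the flashing tool unless something like `verify_image("bios_update.bin", vendor_digest)` returns True (both names hypothetical).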
I make daily use of a 486 to get e-mail, read this blog and others, and look at some select web pages. I used to be able to use it to post comments here, but changes made a couple of years ago prevent that now. To post this I had to fire up a 500MHz P3.
@Nick P: they did port VMS (VAX) to new hardware, first AXP (Alpha) then Intel ... sadly they picked the wrong Intel chip, and now that HP has booted Hurd over to Oracle for banging the porn actress, Oracle are doing their level best to kill it all for good.
@ Robbo the Wonder Spaniel
"they did port VMS (VAX) to new hardware"
Nah, they ported the low assurance version using low assurance techniques. The software I'm talking about is the VAX Security Kernel, a hypervisor/VMM they designed to meet the Orange Book A1 security level. It was a separate product from VMS and required custom VAX microcode. Steve Lipner cancelled the project when he determined that the rigorous development processes slowed time to market too much. DEC also had a bunch of customers in allied countries that couldn't buy it because the US government stubbornly classified B3/A1 secure systems as "munitions," subject to export controls.
Whatever they ported was probably the regular OS, security kernel, etc. That software stack was certified to C2/B1/EAL4: low assurance. It was definitely not the A1/EAL7 VMM. Besides, even if it was, they used a low assurance process for the porting. This would invalidate the assurance of the security kernel and probably the accuracy of a subset of its specifications. I want the *original* product with specs, documentation, tests, source code, and microcode. I'll also take these things for Secure Computing's LOCK platform, the Army Secure OS (ASOS), BLACKER, the Secure Ada Target, DTMach, DTOS, and SCOMP while we're at it. I could probably squeeze some good use out of these old designs.
@Nick P: point taken, I glossed. Sad but true about the low assurance path; now that all the VMS devs have either died or been replaced by people in Bungleore, it's even more so.
@many others: I have an abacus. Pissing contest over?
"@many others: I have an abacus. Pissing contest over?"