Comments

Michael Kohne May 19, 2011 8:16 AM

Some of my older computers had a really good BIOS security mechanism – it was called a ‘jumper’. Are BIOS updates so frequent that we really NEED them to be always available? I know we try to make computers so that the average Joe can do stuff, but is this something the average Joe needs to do?

BF Skinner May 19, 2011 8:21 AM

It’s surprising how much old tech is still in production. There are a couple of reasons for this.

Silicon doesn’t wear out and new code is expensive.

USG agencies (who this is really written for) work on a budget horizon of 3-5 years for replacing client workstations (laptops/desktops). Given recent voter angst, that'll probably telescope out a bit.

No matter how loudly people yammer about waste, the USG doesn't trash multi-million dollar investments casually. For example, there are old mainframes still in operation.

Even hypervisors still run on hardware.

So this guideline is going to be in play for quite a while.

Fred P May 19, 2011 8:51 AM

@Michael Kohne-

Occasionally, yes. Or someone more sophisticated wants to update hundreds (or thousands) of BIOSes without taking a long time.

X May 19, 2011 9:50 AM

Actually, silicon does wear out. What is it called? Electron migration, or some such thing? CPUs do fail over time. It takes something like 8-13 years, but it does happen. Higher operating temperatures and electric shocks will speed up this process.

Now power supplies and hard drives, those fail far more frequently!

Carl 'SAI' Mitchell May 19, 2011 9:50 AM

The BIOS can be attacked, and it's not the only flashable firmware in a computer. DVD drives normally have an EEPROM as well, and some CPUs have flash memory to store microcode patches.
Anything that can be updated can be exploited, though not having updates can allow other exploits.

Clive Robinson May 19, 2011 10:21 AM

I'm not impressed with the title (of the article, not the PDF) of,

“Build Safety Into the Very Beginning of the Computer System”

Mucking about with what the BIOS does or does not do is not the “Very Beginning” at all.

The “Very Beginning” starts with the hardware logic gates and progresses through the various state machines up to the CPU “Micro Code”, which is also alterable by knowledgeable users on some well-known CPUs.

That aside, there are a whole load of other security issues: no matter what you do to the various bits of firmware, you will still haemorrhage information through hardware side channels.

Then there is the issue of “efficiency”. As a rough rule of thumb, any system that is efficient in usage will likewise haemorrhage information via the scheduler, pager, loader and other “efficient” subsystems, unless it was designed by an appropriate expert (and no, they don't work for the commodity OS companies).

And all that is before we start talking about “susceptibility” to various directed energy attacks on the hardware and all those bits attached to it.

All that being said, it is certainly a step in the right direction, and all things considered, for the dollar expenditure this is likely to provide high returns for many, with in reality little problem.

Oh, hands up those who have 486DX or earlier CPUs still running 8)
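
A minimal sketch of the kind of cache-based leak Clive alludes to above, in the flush+reload style: time an access to a shared cache line, and whether someone else touched it since you flushed it shows up in the latency. This assumes an x86-64 machine and gcc or clang; the 100-cycle threshold is only a placeholder, not a measured value, and a real attack would need a shared target and statistics over many probes.

```c
/*
 * Minimal flush+reload style timing probe: an illustration of the
 * measurement primitive behind cache-based side channels, not an attack.
 * Assumes an x86-64 machine and gcc/clang (for __rdtscp and _mm_clflush).
 */
#include <stdio.h>
#include <stdint.h>
#include <x86intrin.h>

/* Return the number of TSC cycles one load of *addr takes. */
static uint64_t time_access(volatile uint8_t *addr)
{
    unsigned int aux;
    uint64_t start = __rdtscp(&aux);  /* timestamp before the load */
    (void)*addr;                      /* the access being timed */
    uint64_t end = __rdtscp(&aux);    /* timestamp after the load */
    return end - start;
}

int main(void)
{
    static uint8_t shared[4096];
    volatile uint8_t *target = &shared[64];

    _mm_clflush((const void *)target);    /* evict the line from the caches */
    uint64_t cold = time_access(target);  /* miss: served from DRAM */
    uint64_t warm = time_access(target);  /* hit: the previous load cached it */

    /*
     * If some other code had touched the line between the flush and the
     * reload, the "cold" time would look like the "warm" time.  That
     * difference is the leaked bit.  The 100-cycle threshold below is a
     * placeholder; the real hit/miss boundary varies by CPU.
     */
    printf("cold: %llu cycles, warm: %llu cycles\n",
           (unsigned long long)cold, (unsigned long long)warm);
    printf("line was %s before the timed load\n",
           cold > 100 ? "uncached" : "cached");
    return 0;
}
```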

zorro May 19, 2011 10:49 AM

@Carl ‘SAI’ Mitchell
“DVD drives normally have an EEPROM as well, and some CPUs have flash memory to store microcode patches.
Anything that can be updated can be exploited, though not having updates can allow other exploits.”

I wonder whether these can be updated through code on a USB stick?

RobertT May 19, 2011 11:41 AM

@Clive
OT: No 486s, but I do have a DEC Alpha that runs, and I had my own PDP-9; it was still running a couple of years ago.

dbCooper May 19, 2011 1:29 PM

It's been a couple of years since I fired either of them up, but in my basement and believed to be in working order:

IBM Model 5160 – PC XT with an 8088.
Apple II Plus with a 6502.

frank May 19, 2011 2:24 PM

Isn’t one of the practical upshots of this going to be further precluding legitimate users from modifying their own equipment? For example, if someone wants to install coreboot (or, to extrapolate, flash a DVD drive to be region-free), that’s probably “unauthorized modification,” surely? If I’m reading this right, implementing this will cause considerable collateral damage, and it might lead to yet another arms race against people who just want to own their own stuff.

Nick P May 19, 2011 3:41 PM

@ Clive Robinson

Yeah, the problems in all my desktop designs start with the x86 architecture itself. We have SMM mode, real mode, protected mode, an unusual ring security implementation, segments (often unused in many OSes), DMA, ultra-high-capacity cache-based side channels, bloated firmware, and errata in security-critical processor components. IF we can make a mostly secure x86 kernel, we'd still have to rebuild it for each choice of chip, firmware and hardware to counter the security issues they impose.

Earlier I was re-reading Paul Karger’s Retrospective on the VAX Security Kernel (free online via Google). He goes into detail describing the design elements and the processes that were required for the system to reach what was considered high assurance at the time. One of the first things I noticed was that they incorporated microcode changes into the design regularly, thanks to how flexible the VAX processor’s microcode was. They used these microcode changes to make an unvirtualizable processor into a decent TCB for their hypervisor.

A retrospective on the VAX Security Kernel
http://cs.dartmouth.edu/~cs38/resources/local/vmm.pdf

I think Gemini Computers did the same thing with x86 when they designed their A1-class GEMSOS OS. I don’t know if I told you Clive but I finally got around to contacting Aesec, current owners of GEMSOS. They quickly responded with information about current and expected future capabilities. From what I gather, the original x86 GEMSOS platform must have had custom firmware because they are currently trying to work out a deal with Intel to do this on one of their new platforms. This is also necessary because their OS’s security is tightly integrated with x86 features like segments.

You know, I don’t see us getting a secure x86 or whatever anytime soon. I think someone wanting that needs to roll their own hardware or build a custom CPU like the Magic-1 machine. There are certain cost and performance considerations, of course, but Magic-1 hosted a web site and runs tons of applications.

Magic-1 homebrew CPU
http://www.homebrewcpu.com/

I honestly keep thinking I should just try to get the software from one of those old B3 or A1 OS builds, get/duplicate the old hardware, and bite the bullet of losing some features. For instance, I might just get a copy of the most recent VAX Security Kernel, buy a VAX 8800 processor system, load it with memory, and port some applications to it. A 22MHz processor didn't sound like much until I read Karger's description of what they were doing with it:

“The VAX Security Kernel can support large numbers of simultaneous users. Once… operational, all software development… was carried out on several VM’s running on the VMM on a VAX 8800 system. On a typical day, about 40 software engineers and managers were logged in running a mixed load of text editing, compilation, system building, and document formatting. The system provided adequate interactive response time and is sufficiently reliable to support an engineering group that must meet strict milestones and schedules.”

And today's IT groups somehow have trouble running 40 application VMs on a 4-way box with 32 2+GHz cores and 32GB of RAM. Maybe we should just build a faster VAX and port the old kernel to the new hardware. Maybe use some of the more security-centric FPGAs and OpenCore designs to speed it up. I'd bet money that it would cost us less and happen faster than Intel making their hardware secure. Hopefully, Intel will at least use their newly acquired Wind River technology, like the separation kernel or RTOS, to improve the security of their platform. There's been some speculation that they might do this, but I don't think we should hold our breath.

MailDeadDrop May 19, 2011 5:17 PM

@X: The phenomenon is called “electromigration” and it refers to the movement of conductor atoms (metals) in solid traces, which will over time convert a nano-scale imperfection into an open circuit. Usually it is a failure mechanism only in poorly designed high-current pathways.

MDD
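
The rough rule of thumb for how fast this happens is Black's equation, which ties the median time to failure of a trace to its current density and temperature (the standard reliability-engineering form, not something taken from the guideline or the comments above):

```latex
\mathrm{MTTF} = A \, J^{-n} \exp\!\left(\frac{E_a}{k T}\right)
```

Here A is a constant of the particular conductor, J is the current density, n is an empirical exponent (commonly around 2), E_a is the activation energy, k is Boltzmann's constant, and T is the absolute temperature. Higher current density and higher temperature both shrink the MTTF, which is exactly the acceleration X and MailDeadDrop describe.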

pdn May 19, 2011 6:51 PM

One interesting thing to see is the tension between IT, government, users, and developers/power users/hobbyists. Each group wants something different from their computer, and if they don’t have it, the computer is “broken”.

Dirk Praet May 19, 2011 7:26 PM

@ frank

“Isn’t one of the practical upshots of this going to be further precluding legitimate users from modifying their own equipment?”

If my understanding is correct, then that’s not what the document says. Secure local update remains a fully legitimate method.

That said, I hardly ever update my BIOS unless I hit a known bug or am alerted to a potential or newly discovered vulnerability/exploit. That's what RSS feeds and Twitter are useful for.
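
For concreteness, here is a sketch of what the authenticated-update side of the guideline boils down to: don't flash an image unless a signature over it verifies against a key you already trust. This is only a userland illustration using OpenSSL, with made-up file names (bios_update.bin, bios_update.sig, vendor_pubkey.pem); the mechanism the document actually describes is enforced by the firmware and flash controller, not by a tool like this.

```c
/*
 * Sketch of a signed-update check: refuse to flash a firmware image unless
 * a signature over it verifies against a vendor key you already trust.
 * Userland-only illustration with OpenSSL (1.1+); file names are made up,
 * and a real BIOS update mechanism performs this check in firmware.
 * Build roughly as: gcc this_file.c -lcrypto
 */
#include <stdio.h>
#include <stdlib.h>
#include <openssl/evp.h>
#include <openssl/pem.h>

/* Read an entire file into a malloc'd buffer; NULL on failure. */
static unsigned char *slurp(const char *path, size_t *len)
{
    FILE *f = fopen(path, "rb");
    if (!f) return NULL;
    fseek(f, 0, SEEK_END);
    long n = ftell(f);
    rewind(f);
    if (n < 0) { fclose(f); return NULL; }
    unsigned char *buf = malloc(n ? (size_t)n : 1);
    if (buf && fread(buf, 1, (size_t)n, f) != (size_t)n) { free(buf); buf = NULL; }
    fclose(f);
    if (buf) *len = (size_t)n;
    return buf;
}

/* Returns 1 only if sig is a valid signature over image under vendor_key. */
static int image_is_authentic(const unsigned char *image, size_t image_len,
                              const unsigned char *sig, size_t sig_len,
                              EVP_PKEY *vendor_key)
{
    EVP_MD_CTX *ctx = EVP_MD_CTX_new();
    int ok = ctx != NULL
          && EVP_DigestVerifyInit(ctx, NULL, EVP_sha256(), NULL, vendor_key) == 1
          && EVP_DigestVerifyUpdate(ctx, image, image_len) == 1
          && EVP_DigestVerifyFinal(ctx, sig, sig_len) == 1;
    EVP_MD_CTX_free(ctx);
    return ok;
}

int main(void)
{
    size_t image_len = 0, sig_len = 0;
    unsigned char *image = slurp("bios_update.bin", &image_len); /* hypothetical */
    unsigned char *sig   = slurp("bios_update.sig", &sig_len);   /* hypothetical */
    FILE *kf = fopen("vendor_pubkey.pem", "rb");                 /* hypothetical */
    EVP_PKEY *key = kf ? PEM_read_PUBKEY(kf, NULL, NULL, NULL) : NULL;
    if (kf) fclose(kf);

    if (!image || !sig || !key) {
        fprintf(stderr, "missing update image, signature, or vendor key\n");
        return 1;
    }
    if (image_is_authentic(image, image_len, sig, sig_len, key))
        puts("signature OK: a real tool would proceed to flash here");
    else
        puts("signature check failed: update rejected");

    EVP_PKEY_free(key);
    free(image);
    free(sig);
    return 0;
}
```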

Anon Y. Mouse May 19, 2011 7:34 PM

I make daily use of a 486 to get e-mail, read this blog and others, and look at some select web pages. I used to be able to use it to post comments here, but changes made a couple of years ago prevent that now. To post this I had to fire up a 500MHz P3.

Robbo the Wonder Spaniel May 19, 2011 9:49 PM

@Nick P: they did port VMS (VAX) to new hardware, first AXP (Alpha) then Intel … sadly they picked the wrong Intel chip, and now that HP has booted Hurd over to Oracle for banging the porn actress, Oracle are doing their level best to kill it all for good.

Nick P May 19, 2011 10:47 PM

@ Robbo the Wonder Spaniel

“they did port VMS (VAX) to new hardware”

Nah, they ported the low assurance version using low assurance techniques. The software I'm talking about is the VAX Security Kernel, a hypervisor/VMM that they designed to meet the Orange Book A1 security level. It was a separate product from VMS and required custom VAX microcode. Steve Lipner cancelled the project when he determined that the rigorous development processes slowed time to market too much. DEC also had a bunch of customers in allied countries that couldn't buy it because the US government stubbornly classified B3/A1 secure systems as “munitions,” subject to export controls.

Whatever they ported was probably the regular OS, security kernel, etc. That software stack was certified to C2/B1/EAL4: low assurance. It was definitely not the A1/EAL7 VMM. Besides, even if it was, they used a low assurance process for the porting. This would invalidate the assurance of the security kernel and probably the accuracy of a subset of its specifications. I want the original product with specs, documentation, tests, source code, and microcode. I'll also take these things for Secure Computing's LOCK platform, the Army Secure OS (ASOS), BLACKER, the Secure Ada Target, DTMach, DTOS, and SCOMP while we're at it. I could probably squeeze some good use out of these old designs.

Robbo the Wonder Spaniel May 21, 2011 5:37 AM

@Nick P: point taken, I glossed. Sad but true about the low assurance path; now that all the VMS devs have either died or been replaced by people in Bungleore, it's even more so.

@many others: I have an abacus. Pissing contest over?
