DARPA Research into Clean-Slate Network Security Redesign

This looks like a good research direction:

Is it possible that given a clean slate and likely millions of dollars, engineers could come up with the ultimate in secure network technology? The scientists at the Defense Advanced Research Projects Agency (DARPA) think so and this week announced the Clean Slate Design of Resilient, Adaptive, Secure Hosts (CRASH) program that looks to lean heavily on human biology to develop super-smart, highly adaptive, supremely secure networks.

For example, the CRASH program looks to translate human immune system strategies into computational terms. In the human immune system, multiple independent mechanisms constantly monitor the body for pathogens. Even at the cellular level, multiple redundant mechanisms monitor and repair the structure of the DNA. These mechanisms consume tons of resources, but they let the body continue functioning and repair the damage caused by malfunctions and infectious agents, DARPA stated.

Posted on June 9, 2010 at 12:59 PM • 53 Comments

Comments

nick June 9, 2010 1:17 PM

Humans die all the time. I wouldn’t want to model an architecture after them!

A clean-slate redesign could be useful, though. Just require encryption for all connections, and require standardized multifactor authentication for all network services. Make all parties (user, computer, and server) digitally sign every bit of data transmitted.

It could be done. Lots of threats would be stopped and lots of others would be easily detectable and limited. But we would have to remove the users themselves if we want real security. Most flaws are in the meat, not the machine.
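In concrete terms, the sign-everything requirement might look like the toy sketch below (Python, using Ed25519 from the pyca/cryptography library; the message and key handling are purely illustrative, and a real network would distribute keys through its PKI):

```python
# Toy sketch of per-message signing; all names here are illustrative.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

sender_key = Ed25519PrivateKey.generate()   # each party holds its own key
verify_key = sender_key.public_key()        # published to the other parties

message = b"GET /records HTTP/1.1"
signature = sender_key.sign(message)        # sign every transmitted unit

try:
    verify_key.verify(signature, message)   # receiver checks before acting
    print("accepted")
except InvalidSignature:
    print("dropped: tampered or unauthenticated traffic")
```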

jlc3 June 9, 2010 1:23 PM

Actually – I’ve written war game scenarios that effectively take advantage of this, only on a large scale. So have some others I’ve worked with, like the NSA Red Team. That they were gamed is unlikely (I know my scenarios were tossed out as being ‘too realistic’). Actual shutdown of Washington was not an option. How hard? Very easy, actually: a couple of well-timed phone calls, some redirection of resources. The effort was nearly nil, and the overall effectiveness was immense.

We are, as you put it Bruce, easy to terrorize. As with just about any other aspect of security, nothing gets done until it hurts, and then the efforts fade as quickly as the pain.

jlc3 June 9, 2010 1:25 PM

Apologies – by the time I wrote my comment and submitted it, I realized it was for the previous entry in the blog. ::sigh::

N June 9, 2010 1:39 PM

Actually this DARPA program is explicitly not about networks; it’s about end-host security.

Christian June 9, 2010 1:42 PM

Sounds like an organic computing approach. Has organic computing ever succeeded in something like that?

Nature gets tasks solved, though mostly in a suboptimal way.

Ax June 9, 2010 2:05 PM

Funny, I was doing something like this just last week, albeit on a smaller scale: coding adaptive/resilient libraries that need to reconcile tainted user input while not impeding system flow. It seems quite clever and organic.

what June 9, 2010 2:10 PM

Could someone write this in even more meaningless hyperbole: “develop super-smart, highly adaptive, supremely secure networks”?

Juergen June 9, 2010 2:25 PM

“Clean Slate Design” and “translate human immune system strategies”?? My bullshit meter is in the red zone already.

Ross Patterson June 9, 2010 2:34 PM

Have these folks ever had the flu? The common cold? Cancer? Methinks the model is flawed.

Ah … ah … achoo!

mcb June 9, 2010 3:05 PM

You’d think an alphabet agency would know that the acronym resulting from Clean Slate Design of Resilient, Adaptive, Secure Hosts is CSDRASH. Calling it CRASH is like calling the “Defense Department Boondoggles and Very Expensive Advanced Research Projects with Low Return on Investment Agency” by the name DARPA.

Ben June 9, 2010 3:05 PM

Evolution works by incremental change, adding bits on and removing bits in response to threats and other changes in the environment.

Which is what is happening already, with DAC, MAC, DEP, PKI, AV, IDS, IPS and so on. What’s all that if not an immune system of “multiple independent mechanisms constantly monitor[ing] the body for pathogens”?

What is proposed here is what we already have.

Joe June 9, 2010 3:47 PM

@Christian and @Ross

Nature is phenomenally effective and efficient at handling the requirements of living. For example, the human brain runs on only about 20 watts of power.

That being said, even immunologic approaches to security will have flaws as the immune system makes mistakes. It can attack the organism it is supposed to protect, and other organisms can fool it into thinking they are not foreign (perhaps the molecular analog of social engineering).

Evolution is a constant dance between the attacker and the defender, which is what we have been seeing in security and will continue to see.

Brandioch Conner June 9, 2010 4:07 PM

I blame the people who refer to computer code as a “virus”.

It confuses people who don’t understand biological science or computer science.

Computer code is designed by the programmer. It is not a product of evolution. The security of computer code is limited by the knowledge and skill of the programmer(s).

RH June 9, 2010 4:57 PM

The advantage biology has is that partial information yields partially usable results. The first poster, for instance, calls for everything to be designed perfectly. What do you do when someone NEEDS to be on the network NOW, but doesn’t have time to implement things? Modern security yields better results when you control the variables, but take away that control, and Mother Nature has created some surprisingly resilient systems.

As an example, go design a network with the following principles:
– at least 10,000 machines, 1% of which are designed by “Eve.”
– You must be able to train someone to maintain the machine as nodes come and go, without your direct supervision. This person has a GED and is no hacker.
– All machines are in use, so there’s no room to move users around Eve’s questionable nodes… they have to be allowed to use them.

Most people I know would throw up at the idea of Eve’s machines already being on the network. But look up our B-cell and T-cell systems… it’s not a 1:1 match, but it’s suspiciously similar.
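One way to read that analogy in code: trust no single node, replicate each job across several, and let the majority outvote Eve’s 1%. A toy sketch (Python; the node model and numbers are made up for illustration):

```python
import random
from collections import Counter

def run_job(job, nodes, replicas=5):
    """Send the same job to several randomly chosen nodes and accept the
    majority answer; a small fraction of lying nodes is simply outvoted."""
    answers = [node(job) for node in random.sample(nodes, replicas)]
    answer, votes = Counter(answers).most_common(1)[0]
    if votes <= replicas // 2:
        raise RuntimeError("no majority; reschedule on a fresh set of nodes")
    return answer

honest = [lambda job: job * 2] * 99   # the 99% of nodes that compute honestly
eve = [lambda job: -1] * 1            # the 1% designed by Eve
print(run_job(10, honest + eve))      # 20, with overwhelming probability
```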

uhnonnymouse June 9, 2010 5:57 PM

Ross asks “Have these folks ever had the flu? The common cold? …”
Probably. And like most people, they survived. Unlike computers, which never recover from minor malware and where viruses normally cripple machines permanently.

Extending biological analogies fails at some point, but the fact is that modern OSes are fragile constructs. If DARPA can make them more robust, then more power to them.

Section9_Bateau June 10, 2010 12:49 AM

re: Al;

Some are already working on it, but it takes time to do so on a shoestring budget with little support.

Our tentative goal is a general-purpose TCSEC B1 system (augmented with a trusted path from B2), with limited-functionality B2, possibly B3, for devices such as routers, switches, firewalls, etc.

As to the network, at least one person doing development wants to put more effort into its architecture, but that person is more interested at the moment in developing methodologies to begin evaluating the effective strength of hashing algorithms, as we are finding collision rates are not at all what we would expect given the effective ‘key-space’ (I may not be using the right term; I am not a cryptographer).

Piet June 10, 2010 1:22 AM

As a biologist I would classify most computer viruses as retroviruses, i.e. they insert their own code into the genome of the host. These are notoriously hard to get rid of in biology; just look at herpes or HIV.

Nick P June 10, 2010 1:25 AM

We really don’t need this project. We already have most of the capabilities they want, thanks to about four decades of prior research and product development. Now, they just need to fund the application of what we already know for critical infrastructure. For instance, they could start funding middleware like TCP/IP stacks designed for high assurance evaluation. I think it would also help if they quit labeling EAL6/EAL7 products “munitions” and preventing their export. I think the open source and aerospace industries are currently doing the best at producing secure and robust solutions. They are producing actual, usable components that integrate into the existing ecosystem.

@ Section9

That’s twice you’ve teased us about this. B1 isn’t good enough, though. B1 with trusted path is fairly easy to build. We need systems like S.C.C.’s LOCK, Aesec’s GNTP, BAE’s XTS-400 or Green Hills’ INTEGRITY that allow legacy or new apps to leverage a high-confidence, certified/evaluated core. The firmware and BIOS must also be exploit-free, with a trusted boot function. The device drivers can’t have bugs. Most secure OSes don’t address these well enough, save for maybe OpenBSD and Bitfrost.

Might start by implementing a Xen scheduling policy and medium-robustness Dom0 on an EAL6-EAL7 RTOS like INTEGRITY or GEMSOS. Design QubesOS-like cross-domain mechanisms that leverage the kernel’s mechanisms to prevent untrusted info flow. A PCMCIA card for trusted VPN and firewall for remote access is another easy application area. A commercial router platform for SOHO with a high-robustness RTOS, protocol stack, SPI firewall, and NAT system would be a nice plug-and-play defense. All of that software already exists; it just hasn’t been deployed on a router. There are many application areas for B3-A1 (EAL6-EAL7).

So, Section9, would you like to mention what group you’re in? What project is this? Is it modeled after an earlier effort? And why would they target the same EAL4+ (B1) standards everyone else went for? I assume you’re aware that B1 systems provide little assurance of proper implementation, have many covert channels, etc. If we can do so much with B3-A1 (EAL6-7) and B2 (approx. EAL5), and there are certified Linux-compatible OSes meeting these standards, why would we want another low-assurance B1 knockoff of a TCB? We should set the bar higher. If ancient Multics can meet B2 and be very useful, then today’s next system should be able to do at least as well.

Clive Robinson June 10, 2010 4:04 AM

@ section 9,

I’m with @Nick P on this: your model is broken before you even start, so it’s an outright fail. Sorry, but that’s the long and short of it.

As Nick P knows, I’m not that impressed with MILS etc. and see them as a stop-gap measure at best.

That being said, putting them into place will deliver big time, provided programmers can be educated not to do daft things, or not allowed to do them (the latter being preferred).

What you need to do is really go back to the slate and grind a new level playing field on it. That is, ignore single-CPU von Neumann architectures; they cannot be made even remotely secure.

With a multi-CPU Harvard architecture system, with state-machine security hypervisors doing what the von Neumann system does (i.e. load executable programs), you have a base architecture on which you can work.

Have a look at my previous blog posts with Nick P about “Prison-v-Castle” implementations.

Clive Robinson June 10, 2010 4:46 AM

@ Section9,

With regards,

“… methodologies to begin evaluating the effective strength of hashing algorithms, as we are finding collision rates are not at all what we would expect given the effective ‘key-space’ ”

Oh dear…

Key-space issues are going to keep coming around and chewing our tails long after they have nailed the lid down on both of us.

There are several issues that have to be dealt with.

First off is the “Magic Pixie Dust” problem (for which I blame Intel Corp engineers). Key-space requires real entropy, and no deterministic algorithm can give you entropy. The best a hash function on its own can do is what is effectively a code book encryption…

Nobody uses code book encryption in properly designed systems, because any given plaintext always produces the same ciphertext; it is JUST a simple substitution cipher, albeit one with a very, very large (faux) key-space.

You can improve things by cipher chaining, which should take the use of a hash a bit further. But unfortunately it tends not to…

Most people came across cipher chaining with DES and assumed from the original DES spec that this “was good”. Well, it was, but only in specific cases, not the general case (which usually gives short cycles). Hash functions have been noted for problems in this area; Bruce has referred to it on this blog on the very odd occasion.

So I would personally not use hash functions to generate keys if I wanted to fill the key-space. If you were just using deterministic algorithms, you would actually be better off using AES in CTR mode followed by AES in a cipher chaining mode.
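To make the code book point concrete, here is a minimal Python illustration (hashlib and os.urandom stand in for whatever expansion function and entropy source a real design would use):

```python
import hashlib, os

# A deterministic function is a code book: the same input always maps to
# the same output, no matter how strong the function is.
seed = b"not actually secret"
assert hashlib.sha256(seed).digest() == hashlib.sha256(seed).digest()

# However you expand a seed (hash, AES-CTR, ...), the portion of the
# key-space you can actually reach is bounded by the seed's entropy, so
# the seed itself must come from a real entropy source, e.g. the OS pool:
seed = os.urandom(32)                 # 256 bits of (hopefully) real entropy
key = hashlib.sha256(seed).digest()   # expansion adds convenience, not entropy
print(key.hex())
```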

However, what you need is real, honest entropy, and this is a very difficult problem, for a whole host of reasons that would easily fill several books (and do, across many fields of endeavor, which is why you need a polymath to design good-quality RNGs).

For instance, you can have two or more supposed TRNGs that individually pass any statistical test you care to throw at them. However, if you co-locate them and compare their results, they will fail the same statistical tests… There are a number of reasons for this, and you might have a great deal of difficulty identifying why. Some reasons are to do with “coupling”, others are due to unrecognised bias, and others are down to incorrect assumptions about what deterministic algorithms will give you.

I could go on at considerable length about how you can take a little entropy and spread it, but you would do better looking at the history of entropy pool systems and why most have failed.

Section9_Bateau June 10, 2010 6:08 AM

As someone who has actually used a GEMSOS-based system (I have not worked on any other high-assurance systems), I can say that calling them general-purpose is a joke.

Roger Schell actually approached a friend of mine at a conference where we brought up having an MLS FTP server based on it, hoping we would be willing to develop something to make it more general-purpose: a web browser. Sadly, he wanted to charge us a rather ridiculous price for his SDK, even if we agreed that Gemini would own all the code we developed. That was enough to kill the project.

I know of Multics’ achievement; hell, I have a copy of the paperwork from the NCSC regarding its certification. As anyone who has visited the site I have placed online is aware, I have a rather large code archive of it. The issue that I have with it is that it was too dependent upon using a mix of hardware and software protection, tying it to hardware that died off.

I also know of at least one case where a Multics system was owned when someone subverted the patch-distribution process, and there was a recommended physical change applied at a few sites that allowed bypassing of some security measures (it was before my time, so I do not know the full story).

Personally, I would LOVE to release A1/EAL7. Honestly, it is one of my life goals. However, I have never seen a general-purpose system at that level. Multics is the highest at B2, and with the amount of hardware threats one must assume, I do not know if something B2 or B3 could be mass-produced industry-wide the way x86/amd64/SPARC systems are. Another issue would be making it run on top of hardware acquired through mass distribution. For the average home or business general-purpose user, I believe that even a B1 system would be a massive improvement.

I am hoping to do active development in the near future. At present, the system and design are being discussed and researched among a group of a few friends, some other security researchers, and some mathematicians with an interest in computers (one or two people with hardware experience), in our spare time. We all have day jobs or study, so we can’t devote as much time to this as we’d like.

Dave June 10, 2010 7:33 AM

“develop super-smart, highly adaptive, supremely secure networks”

Translation: Instead of good, solid engineering work using established methods to build a secure system we’re going to throw millions of dollars at esoteric research to entertain a couple of academics and some grad students for a few years.

kashmarek June 10, 2010 10:05 AM

Ben indicated “What is proposed here is what we already have.”

Except what we have is multiple and independent. What DARPA wants is single and dependent. That is, one all-controlling entity, totally top-down, totally authoritarian.

Nick P June 10, 2010 1:29 PM

@ section 9

When speaking of GEMSOS and the others, I’m referring to the fact that they can be applied to many existing problems, mostly embedded. Some of these embedded solutions can supplant PC security. I wouldn’t dare claim they are general-purpose, and I suspected their licensing costs were one reason for low usage.

The combination of hardware and software isn’t as bad as you might think nowadays: FPGAs make dedicated hardware enforcement cheap and easy. The Air Force is using FPGAs to encrypt RAM and isolate VMs. The LOCK system was a UNIX PC on a trusted kernel with a hardware coprocessor (SIDEARM) that sits between the CPU and memory/devices to do access control. That could be done with an FPGA. Anyone wanting custom processors and firmware for a high-security embedded device could potentially develop and deploy them on FPGAs. I think there are even write-once FPGAs, so the main TCB could be non-bypassable.

I agree that a whole general-purpose OS isn’t going to hit B3/A1. To be more specific, when I say B3/A1 I’m strictly focusing on the OS and trusted software. Building a whole B3/A1 system is too expensive for either of us and requires knowledge that’s actually classified (think TEMPEST). I’m just focusing on what’s in kernel mode or trusted elsewhere. The existing high assurance efforts basically run a high assurance kernel with some protection and isolation mechanisms, then run legacy apps in a paravirtualized layer and security-critical apps in a process directly on top of the security kernel. If you want some [implemented] examples, see Micro-SINA IPsec VPN, LynxSecure (uses Intel VT), Integrity-178B (certified), Nitpicker (secure GUI), OKL4 (capability-based and open source), seL4 w/ CAMKES support, MK++, and Green Hills’ TCP/IP stack. These show a viable approach to reducing TCB size for arbitrary applications, while strengthening confidence in their implementation.

I also appreciate your site. It has plenty of nice links. I will be looking into the secure DBMS paper later. If you’re wanting to develop a trusted OS, I’d recommend hitting IEEE and ACM up for papers on “high assurance”, “MLS”, “covert channels”, etc. They have a nice collection. I was recently using them to get all the info pertaining to A1 system design, esp. covert channels. I’ve found that covert channel elimination is somewhat of a black art and the proven methodologies aren’t well-known. I’m currently studying a RAD form of high assurance that could be applied in open source. You could say it’s a more realistic version of EAL5-EAL7 requirements that focuses more on practicality than paperwork or government-centric requirements.

I’m also trying to decide whether or not to commit to an A1-class software product in a market that doesn’t care. If I did, it wouldn’t be profitable, so I should build a component that would last and be usable in real systems. Maybe a firewall, TCP/IP stack (bottom-level), VPN device, or even user-friendly SCM system for future high assurance projects. I’m probably going to use Z/Statecharts, Coq/Isabelle, and OCaml/Ada/subsetC++ during development. The most important thing is picking the right requirements. The rest just helps put them into action.

Lawrence D’Oliveiro June 10, 2010 10:10 PM

Given that about half the ailments people suffer from these days seem to be autoimmune-related (e.g. allergies, asthma), I wonder what a human-immune-system-based network would look like…

thinker June 11, 2010 2:52 AM

implementing security modeled after immune systems is always funny until someone catches an autoimmunity …

just saying

xAx June 11, 2010 3:38 AM

@Section9_Bateau:

“I do not know if something B2 or B3 could be mass-produced industry-wide the way x86/amd64/SPARC systems are.”

Now that you can have a PC simulating any other processor/system, you can have a virtual machine with correctly defined assembler instructions, with a simple enough interface to devices, with hard memory protections at the local level, and a reasonable computer language.
The advantage is that you can optimise the system before producing the first hardware.
The problem is the usual one in this market: that is a lot of work, and the people who would do that work would be fired before developing the idea, just after the patents had been accepted; no “virtual machine” would ever be built. Just use the quickest way to make money without investing too much.

Nick P June 11, 2010 7:50 AM

@ xAx

Well, it’s a nice idea and the virtual machine concept has been tried in practice. The existing high assurance, general purpose systems are done via virtualization. Language-level security for Java and .NET exists in a virtual machine. Systems like Qemu let a person test untrusted executables. Instruction-set randomization techniques often use a virtual machine whose instructions require a key to unlock. Ten15 used a VM to implement capability-based security at the language level. So, it’s definitely doable and it’s being done in various regards.

The main problem is attack surface. In a B3 or A1 system, the design techniques are about reducing the size of the Trusted Computing Base (TCB) and proving its correctness. In your scheme, we have parsers, JIT compilers, OS integration, hardware integration, etc. All of this must be done correctly without exploitable bugs. In addition, adversaries can often beat a virtual machine by attacking processor, firmware, BIOS, OS, or virtualization layers that the scheme depends on. So, while you can simulate a secure architecture, in practice the implementation of the simulation can more than triple the TCB and open lots of holes. High assurance is some damn unforgiving work.

In spite of these objections, I have considered this a few times and still might implement it one day. My friends and I were joking about building a rock-solid Alpha emulator to run the B1 VMS or maybe duplicate the SCOMP system. I’ve also thought about resurrecting the Ten15 VM and targeting one of these high-level, info-flow-aware languages to it. I also keep toying with making a simple x86- or PPC-compliant processor core that, say, doesn’t have 100+ bugs. 😉 It could run on an FPGA in the beginning. Right now, the best medium-robustness general-purpose solution is a customized/hardened build of OpenBSD. Others with potential include NetBSD, modified Linux, QNX, INTEGRITY, or a separation kernel-based virtualization (yet to be proven).

Brian Tung June 11, 2010 6:28 PM

I’m all for this, provided there’s sufficient follow-through. That some people can’t see the point in it isn’t sufficient reason to oppose it. It’s research–most research projects have detractors who, before the fact, couldn’t see the point in it. But DARPA can’t abandon it, and moreover has to have someone who knows something about adaptive systems to guide this program knowledgeably.

DARPA has gone this route before. About 15 years ago, I proposed a project for information system survivability that included a portion on using a large array of very simple and loosely coupled finite-state machines to make resource-allocation decisions when a system was under attack. We argued that the system was quasi-biological in the sense that (a) there was no explicit rule base but instead a feedback loop that sensed improvement or degradation and modulated its allocation accordingly, and (b) the resource allocation “decisions” could only be understood as the collective action of the finite-state machines and could therefore withstand the demise of a substantial number of them. We thought that the argument was reasonable and apparently so did DARPA, for they funded the project with that portion included.

There was another portion they were apparently more interested in, though, which was a standard specification for intrusion detection interchange; this was one of a few seeds for CIDF, which eventually inspired the IDMEF work in the IETF. Eventually, the program manager decided that they wanted us to spend more of our time doing CIDF work; shortly thereafter, they cancelled the adaptive resource allocation portion of the project.

Lest you think this was just a referendum on our work in particular, less than a year afterward, the DARPA intrusion detection body of work took a very strong turn toward integration. To my chagrin, basic intrusion detection research took a back seat to tools for correlating results from detectors that weren’t fully baked because they hadn’t had enough work put into them. Integration is good–tools should be able to talk to one another–but doing it in lieu of making the tools better is classic cart-before-the-horse program management.

It’s too bad. I think some useful ideas could have come out of that biological metaphor, but because there wasn’t someone at the top with a clear understanding of which concepts translate out of that analogy and which don’t, it just went out with a whimper. I sincerely hope it goes better this time.

Dave June 12, 2010 12:38 AM

“I’m also trying to decide whether or not to commit to an A1-class software product in a market that doesn’t care.”

It’s not that the market doesn’t care, it’s that really secure systems have a proud tradition of utter unusability, and that’s why they fail in the marketplace (well, alongside their very high cost), not because people don’t care about security. As one analyst with a lot of experience in secure systems put it, “You want to write up a report with the latest version of Microsoft Word on your insecure computer, or on some piece of junk with a secure computer?”. It’s no contest. If your super-secure system won’t run Outlook, IE/Chrome/Firefox, and Word, it’s not going to succeed in the marketplace.

Nick P June 12, 2010 1:11 PM

@ Dave

They were largely unusable, but many medium-to-high robustness systems are plenty usable. Systems like INTEGRITY Workstation and Tenix Interactive Link provide highly robust information flow prevention at a lower cost than manual air-gap methods. The market for INTEGRITY PC/Workstation has been low since it was introduced years ago. I figure pricing and features have something to do with this, as you suggested.

I dispute that people care about A1-class security. My theory is that what you care about is defined by your actions, not words or fleeting thoughts. People’s actions show that if a system is slightly more convenient or cheaper than the secure system, then the majority of people are likely to buy the insecure system. Customers also consider security a lower priority and resist changing their infrastructure until digital Death is knocking at their door.

So, I repeat my assertion that human/organizational psychology and economics means that there is no real market for high assurance systems in non-mission critical applications and even those often go for medium robustness solutions. Anyone wanting to produce a successful product today should focus on keeping user experience the same, including many buzzwords, getting a few high profile customers to brag about, and then put at least half the capital into marketing/branding. People will use the hell out of that product because it works on system X, has feature X, is used by group Z, etc.

Dave June 13, 2010 4:19 AM

“They were largely unusable, but many medium-to-high robustness systems are plenty usable. Systems like INTEGRITY Workstation and Tenix Interactive Link provide highly robust information flow prevention at a lower cost than manual air-gap methods.”

But what makes (say) Windows Windows isn’t the kernel but the vast, enormous mass of stuff built onto it. When I mentioned Outlook, IE, and Word, it was a deliberate choice: you can send HTML in Outlook that’s rendered by the Trident engine (from MSIE) with an embedded Word doc that’s shown transparently. Assuming Integrity-178B could be adapted to run software like this, it’d either block this kind of thing as insecure (which means people wouldn’t use it) or not block it, in which case it wouldn’t be much better than Windows. So the issue isn’t “can we build a secure/reliable/whatever kernel”, it’s “can we provide all the additional services and functionality that would make people want to use it”, and that’s the killer, not the choice of kernel.

Clive Robinson June 13, 2010 6:55 AM

@ Dave,

As you rightly point out, the issue is much, much more than just the low-level bits.

It covers the whole of the stack and then some.

Even if you tag objects with security indicators, you still have the aggregate problem of unclassified items effectively generating metadata about the aggregate that is classified (i.e. a collection of downloaded public-domain documents showing the direction of research, etc.).

But as with any journey you have to start somewhere, and as with building houses, solid foundations allow a solid house to stand the test of time, whereas a solid house on poor foundations in shifting sand is not going to last very long at all.

My personal view is that security starts (and can end badly) in the design of the silicon, and good segregation via real-world physical means with easily monitorable choke points is mandatory for good security.

However, I’m sufficiently a realist to realise that we have gone too far down the wrong road to turn back easily. Thus we need the likes of DARPA to kick-start work on clearing a better path; once it is sufficiently well travelled, hopefully people will prefer this new, more secure path to the old insecure one.

It really does need the likes of Intel or AMD to see a clear financial advantage for general-purpose systems to become more realistically secure at the silicon level. Otherwise we will have to cross our fingers and hope with assumption-laden software proofs.

And this is only likely to happen if Intel and AMD see a clear market into which they can profitably sell. DARPA is one of the US agencies that might be able to pull this off, but I’m doubtful.

Nick P June 13, 2010 2:48 PM

@ Dave

It’s certainly a systems thing. I’m just illustrating pieces of software that were built to high-assurance standards in such a way that other components can leverage their security mechanisms to boost their own security. A desktop system like Windows would probably use type enforcement (e.g. SELinux) or capability-based security. On the latter, CapDesk has shown that capabilities are very effective at enforcing POLA while providing a seamless user experience.

Your argument almost missed the point, though. Sure, the user might want to render this, transparently do that, and so on. No doubt the functionality will be necessary to satisfy users. Here’s the question: why did an exploit of Trident give an attacker total control of a computer? Why are buffer overflows even happening when they are easy to prevent? The issue is that current systems give applications way too much privilege. A capability- or TE-based design would make life more difficult for attackers, rather than boosting their performance. A high-assurance kernel could be used for isolation and capability enforcement. Thanks to DO-178B, there are also more robust networking, graphics, sound, USB, and CORBA stacks.
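The POLA idea is easy to sketch. In the toy Python below, the renderer receives references only to the objects it needs, and nothing in scope grants wider authority (plain Python cannot actually remove ambient authority, so treat this as a model of what CapDesk-style systems or an OS-level TE policy enforce for real):

```python
class ReadCap:
    """A capability: holding this object conveys read access to one file only."""
    def __init__(self, path):
        self._fh = open(path, "rb")
    def read(self):
        return self._fh.read()

def render(page: bytes, image: ReadCap) -> bytes:
    # The renderer can use exactly what was handed to it: one page, one
    # image. An exploit of a bug in here gets the image bytes, not the box.
    return b"<doc>" + page + image.read() + b"</doc>"
```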

However you do it, there’s no technical or even practical barrier to POLA enforcement in modern OSes. Different attacks are reduced or mitigated entirely, with performance drops in the single-digit percentages. The OS and app developers just don’t care, mostly for economic reasons. The government could also be doing more. Specifically, they should fund the creation of high-assurance components that could plug into commercial systems. The important thing is that they develop secure, useful software and then release it. Spending $25+ million to build an A1 product, then withholding it from the public, is just intolerable.

Robert June 13, 2010 10:25 PM

@NickP
“The government could also be doing more. Specifically, they should fund the creation of high ….”

Why? If industry cared about security, they would gladly pay to develop it, and thereby achieve a differentiated product and a real, sustainable monopoly.

When gov’t funding pays for a project the result is one of the following:

• Obscurity and virtual monopolies for the “secure” computer producers, but small volume and limited exposure to real hacking attempts.
• Ubiquity with no profit for anyone (made in China).

The ubiquity result is likely to be the worst outcome because it is truly faux security: once the solution is sufficiently commoditized there is no margin in the product, and without margin the race to the bottom is guaranteed, and at some stage the security gets optimized out of the product.

If you need proof of this, look no further than the security microcontroller divisions of both Infineon and Atmel: both have the lowest gross margins, and both business units are “for sale”, so they will likely become sections of one of the fast-rising Asian companies.

I would argue that the first outcome, of system monopoly, really just becomes security through obscurity. I accept that EAL 4, 5, 6… are increasingly more difficult and inherently more secure, BUT in most cases the hardware security is at most EAL3. The more popular these systems become, the more you have to expect that hardware hacking will dominate the game. As has been proven multiple times, attacks on ALU integrity will render almost any encryption system useless.

Take the suggestion that FPGAs are suitable for secure computing! It is a patently absurd assertion, especially for the RAM-based FPGAs. What underlies this belief is that the FPGA abstraction is “too complex to understand”: basically, the FPGA load bitstream is too removed from the security problem for anyone to attempt an attack at this level. These people need to be introduced to “functional verification” before they write another word.

In closing, I’ll side with Clive that real security has to begin with the hardware. Program and data space must be separated, and the system needs at least two levels of hypervisor.
The on-chip RNG needs to be much more secure; frankly, what passes for RNG testing today is a joke. I want to see serious PSRR (power supply rejection ratio) specs, along with substrate and parasitic coupling/locking tests for the RNG primitives.

Frankly if there is no entropy in the RNG then the rest of the system is an even bigger joke.

Nick P June 14, 2010 1:20 AM

@ Robert

Your two scenarios seem right on to me. Because these are pluggable components, I’m fine with the monopoly scenario. Someone wanting security could pay for it, and if the monopolist tried some crap the component could be swapped out later. The interfaces would all be industry standards and open, so this would be easy. I find the second scenario acceptable as well: the point of government-sponsored creation of high-assurance components is ubiquity at little to no cost. Businesses will still make plenty of money extending, integrating, etc. An example would be a verified IPsec stack designed to integrate with numerous OSes. But you’re correct that the components themselves won’t be profitable.

Physical hardware security is a non-issue in many application areas. I don’t know if you’ve looked, but much “high assurance” software assumes it is physically secure. I don’t know a single product that’s tamperproof, and those that are tamper-resistant enough don’t really meet performance or cost requirements. So, my idea for government-sponsored components would carry the same assumption: trusted personnel using it, in a physically secure location. There’s no other choice without radical (and costly) hardware developments. Many good security devices just need to protect against remote attacks. EAL3-or-less physical hardware security isn’t a concern. I’m only concerned with remote attacks like processor errata, faulty drivers due to spec errors, etc.

On the subject of FPGAs, I’ve said before I’m not a hardware guy. I mentioned them as a secure computing aid only because the US government is using them in numerous research projects and prototypes. The HAWCS wireless system uses one as a key security enforcement mechanism, and the Air Force has a scheme to improve separation of Xen VMs by using FPGAs for memory encryption and PCI device access control. There’s also formally verified encryption IP, plenty of useful non-crypto IP to start with, and even write-once chips. So, I see the government boys really getting into FPGA tech and had to mention it.

Btw, you seem pretty knowledgeable in the hardware area. What’s so bad about FPGAs? Why shouldn’t we use a write-once, COTS FPGA to implement, say, an in-line disk encryption system or a LOCK-style type enforcement MMU between processor and memory? If we aren’t concerned with hardware attacks requiring physical interaction, what threats are we looking at?

I agree with you guys that a different hardware approach is the best route. Besides being a non-hardware guy, I focus on software because whatever I build has to use cheap COTS hardware and integrate with existing applications. That imposes certain limits. I think Snow of the NSA highlighted the issue well: on every level, modern computers were designed for sharing, whereas security enforcement requires separation with mediated sharing. I’m less worried about the RNGs and more worried about a secure TCB. I think as a midterm solution we can just build a higher-quality version of something like Intel vPro, layer on a high-assurance kernel/middleware, and then use VMs and isolated apps. These are already on the market for a pretty penny. Truly secure computers will require fundamental redesigns, though. I think they are less likely than govt-sponsored high-assurance components.

I’m personally toying with using a bunch of little SBCs wired up, with one acting as a message router and one for the user. The user SBC has exclusive access to HID and can launch apps, set privileges, etc. on the others. The SBCs would be used much like the VMs in Qubes OS. Initially, the OS would be a hardened Linux with trusted boot, kernel write prevention (SecVisor), and certain services. Further driving costs down would be using small PCs like the VIA Artigo, which cost about $300. The system might include different types of boards, like one with a Core 2 Duo and one with a VIA C7. They could all fit into the space of a typical PC tower and cost under $2,000 for the base system with four execution nodes if Artigos are used. The security policy is separation and cross-domain transfer according to the user’s policy. The assurance goal is medium robustness against non-physical attackers. I’d mainly use it for isolating/controlling things like web browsers or firewalls, in addition to executing untrusted software.

Robert June 14, 2010 3:50 AM

@NickP
I guess we are just looking at the security problem at a different level.

For me, hardware access security is impossible: I have to assume that the user/attacker will do whatever is within their technical capability to interfere with the CPU, especially if they can profit from this activity.

When I think of multiple CPUs I’m really thinking of single-chip computers, say a quad ARM or MIPS system, with an inbuilt hypervisor based on a hardware state machine. I’m thinking of cost models under US$20, and applications like secure payments, mobile browsing, smart appliances, smart grid…

IMHO if anyone can do this, then these secure computing blocks will be the interface for much less secure processing units (GPU or whatever), so security belongs at the edge of the application space.

I guess FPGAs are OK if you assume that physical security is not a problem and that hardware hacking is highly improbable. My biggest concern with FPGAs is that there are just too many ways I could change the function very slightly so that it would give me a way to compromise the security. Unfortunately, without extremely close inspection, it is very difficult for most people to tell whether the achieved encryption is that of a 16-bit key or a 1024-bit key.

Clive Robinson June 14, 2010 7:53 AM

@ Robert,

“IMHO if anyone can do this, then these secure computing blocks will be the interface for much less secure processing units (GPU or whatever), so security belongs at the edge of the application space.”

I believe it can be done but not with monolithic software, and it gets some advantages for free.

From the hardware perspective, imagine a very simple (sub-RISC) CPU as a basic building block. It has a very simple memory buffer window to its controlling block.

Its basic I/O is simply a stream of words through the buffer, which acts as a choke point. It has no control over its memory; that is done by the hypervisor system via the control block (essentially an MMU controlled by the hypervisor). Because self-modifying code is a risk that is not required, and the CPU does not need to load program code or data, a strict Harvard architecture fits naturally.

The amount of memory given to the CPU by the control block is kept to the minimum needed to do a specific function. The function blocks are small and have well-known behavior in memory and a well-known signature. These signatures are known to the control block and the hypervisor.

The function blocks are the equivalent of statements in a high-level language, similar in concept to a Unix shell script.

The CPU is effectively a “prisoner in a jail cell”: what it can do, what it knows, and everything it does is monitored by the control block and its hypervisor.

The control block is a state machine controlled by the hypervisor; thus, unlike the CPU, it is fully deterministic.

From a programmer’s perspective, they are writing in a script language, and the hypervisor does the behind-the-scenes plumbing to pass code from one statement (function block) to the next.

The result is that malware coming from outside has nowhere to hide or execute. The function block code is written by trusted secure-code programmers, and each function block has its own security signature, which the hypervisor uses to detect abnormal functioning.

This means that ordinary programmers do not need to be security-aware at the low level, or for that matter at a higher level. Because the code written is at a much higher level, it is harder to hide malicious code in what they do.

Data is effectively “object based”, and part of each object is a security tag that the hypervisor knows about.

It is not perfect, but it’s a lot better than what we have with monolithic CPUs and monolithic code running on them.

Thus,

“… so security belongs at the edge…”

is true, but at the line-by-line code level, not the application level.

There are several other advantages to the system that help kill the bandwidth of covert channels down to fractions of a Hz without overly affecting performance.

Have a think on some of the other advantages (or cheat and go back to Nick P’s and my posts about the “Castle -v- Prison” design).

Nick P June 14, 2010 1:54 PM

@ Robert

Yeah, it’s really beyond my knowledge to build a truly secure mobile device that considers physical attacks. There are just so many, and most seem to contradict the requirements of the device. For instance, the kinds of shielding needed to defeat passive or active TEMPEST attacks make the device bulkier. More hands-on attacks might only have obfuscation as a defense, which means extra, unnecessary work that drains the battery (well, that's probably what it means). If we can build a secure mobile telephone or something, it will probably fit in a briefcase. 😉

@ Clive

Sure, sure. You know what’s going to happen, though: your prison will just end up being a room in my castle. It will be seized and utilized to the king’s (err, user’s) benefit. I could see a software prison approach that relies on a virtual machine that enforces access control and resource limits based on a signed (and checked) config file that comes with the application. Think Java and its standard library, but each library is an “object” whose accessibility is restricted to specific “subjects.” The application itself and its functions would be subjects. To avoid having to track every single object, we’d probably do info-flow security based on types, as in Type Enforcement (though simpler). Capability-based security is also possible, but the VM might have to run in Ring 0 or Ring 1 to use the MMU to enforce confidentiality/integrity of capabilities.
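A minimal sketch of the signed-config idea, with an HMAC standing in for a full signature scheme and every name hypothetical:

```python
import hmac, hashlib, json

PLATFORM_KEY = b"provisioned at install time"  # stands in for a verifier key

def load_policy(manifest: bytes, tag: bytes) -> dict:
    """Admit an application's access-control config only if its tag verifies."""
    expected = hmac.new(PLATFORM_KEY, manifest, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, tag):
        raise PermissionError("unsigned or tampered policy; refusing to load")
    return json.loads(manifest)

manifest = b'{"app": "editor", "may_call": ["ui", "spellcheck"]}'
tag = hmac.new(PLATFORM_KEY, manifest, hashlib.sha256).digest()
policy = load_policy(manifest, tag)
assert "net" not in policy["may_call"]  # the VM never hands the editor a
                                        # network object it didn't declare
```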

Dave June 15, 2010 1:44 AM

“On the latter, CapDesk has shown that capabilities are very effective at enforcing POLA while providing a seamless user experience.”

I was going to mention CapDesk and security by designation in a followup… is there any work in this area still ongoing? It seems that that’s the sort of thing that should be getting funding.

(You also need to be careful when you throw the term “security” around; from looking at some of these posts it seems people have very different definitions of the term: one person wants a high-security system resistant to hardware-level attacks, someone else wants a high-assurance system, and I just want something that won’t be overrun with malware at the first opportunity and don’t care about resistance to hardware-level attacks or whether it’s got a CC rating or not. So maybe when people argue for solution X they need to also state what their threat model is.)

Robert June 15, 2010 2:20 AM

@NickP,

A chip with quad MIPS CPUs and a bunch of other dedicated hardware functions will definitely be less than 7mm x 7mm die area, with a package size of about 25mm x 25mm.

Individual uC controllers (RISC cores like the PIC1657) take about 80um x 80um per core with 2K ROM and 256B RAM. In 40nm processes, a core like this can easily run at clock rates of 1K MIPS. So it is possible to have one dedicated uC per bondpad/bus I/O. Each uC can run asynchronously to the main core so that I/O functions have complex timing relationships to the main CPU (this requires multiple clock-domain buffers but helps with TEMPEST), and additionally each uC can strictly enforce protocol limits on comms/protocols through the attached bondpad.

This style of device defeats most of the attacks at the edge: buffer overruns simply can’t work; you either lose the extra data or the data wraps around and writes over the initial data, but in no way does it destabilize the core CPU.
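The policy is simple enough to model in a few lines; here is a sketch of the drop-on-overflow variant (Python, purely illustrative of the behavior, not the silicon):

```python
class EdgeBuffer:
    """Toy model of a per-pad I/O window: fixed memory, and excess input is
    dropped rather than ever touching anything outside the window."""
    def __init__(self, size=256):
        self.size = size
        self.data = bytearray(size)
        self.used = 0

    def write(self, incoming: bytes) -> int:
        room = self.size - self.used
        accepted = incoming[:room]           # excess bytes are simply lost
        self.data[self.used:self.used + len(accepted)] = accepted
        self.used += len(accepted)
        return len(accepted)                 # caller sees how much got in

buf = EdgeBuffer(8)
print(buf.write(b"A" * 1000))  # 8; the "overflow" never leaves the cell
```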

In the case of realtime-sensitive I/O, these edge uCs can be used as smart timer/counters and intelligent interrupt controllers. Basically, these smart I/O blocks are intended to stop “out of spec” I/O signals from adversely affecting main CPU operation.
EMSEC and TEMPEST attacks are very powerful techniques. However, there are ways to defeat most of these attack vectors. In principle, the smaller you make the PCB, the harder it is to find long traces that work as EMSEC antennas. Additionally, on the chip I’d add double guard rings and deep N-well/NBL isolation to all I/O. This costs die area but fixes most I/O current injection issues. It is a technique only really used in automotive and power-IC design, but it works equally well for digital CMOS.

The real key to security is the hypervisors; these elements must be “state machines” and must be protected from invasive attacks and FIB modification. To do this you need at least two active metal shield layers and a lot of secure logic design techniques (check http://www.lightbluetouchpaper.org/ for some of the simpler ones). Frankly, the best new secure design techniques are best kept secret. Basically, security by obscurity is alive and well.

Clive Robinson June 15, 2010 4:58 AM

@ Robert,

“Frankly the best new secure design techniques are best kept secret.”

For good and proper legal reasons (patents etc)

“Basically security by obscurity is alive and well.”

But not for “security reasons”

Nick P June 15, 2010 8:17 AM

@ Robert

Thanks for the info. I’ll add the recommendations to my records and investigate it if I do work that can utilize it.

@ Clive

“but not for security reasons”

Are you so sure? I think obfuscation is one of the few decent security measures in the hardware arena. A team of good security engineers can spend a ton of time and money to design a secure piece of hardware. It will be really good. Or a lesser team can spend little to obfuscate a design to the point that attack is impractical. A combination of both is the best practice for a simple reason: designers often make mistakes and new attacks often appear that invalidate assurance of a device. Obfuscation helps in this case.

I think Windows is actually a good example of the power of obfuscation. We all know that OS has way more exploitable bugs than attackers have found. Security experts say the lack of source just slows attackers down, reducing the number of bugs they can find or increasing the time it takes. Well, whatever name they give it, the obscurity ensured Windows only had twice the discovered vulnerabilities per annum even though it probably had 100+ times the number of attackers. Apply this lesson to hardware, and secrecy can help prevent the exploitation of small mistakes, temporarily or permanently. I’m sure this is why the NSA keeps the design of Type 1 algorithms/protocols secret. So, secrecy isn’t security, but it does reduce risk when combined with good design.

Marc Espie June 15, 2010 11:08 AM

@Brandioch Conner: have you looked at computer virii recently?

polymorphic code and genetic algorithms… a lot of those tricky little beasties use every trick in the book to stay under the radar of the good guys.

We have reached the stage where you can hide enough code in a binary to actually get really close to biological techniques, such as recombining instructions, changing the decryption code, or in general, doing the same thing in different enough ways to placate every virus checker…

Clive Robinson June 15, 2010 11:23 AM

@ Nick P,

“I think Windows is actually a good example of the power of obfuscation. We all know that OS has way more exploitable bugs than attackers have found”

Agh there is a difference between “security by obscurity by design” and spaghetti code spun by rodents making a nest 😉

With regards,

“I’m sure this is why the NSA keeps the design of Type 1 algorithms/protocols secret.”

Actually there is a simpler reason: the NSA has effectively only two tasks,

1, Protect the communications of the US.
2, Read the communications of all other nations.

Thus they have an interesting problem. The notion that anything they design will fall into the hands of a hostile nation at some point is not lost on them. In truth it has happened many times, which means that it will be back-engineered.

Thus if they put the best there is in their kit, they may well fulfil part one of their mission, but that means they will fail part two.

One solution to the problem, which I’ve mentioned a few times in the past but people really have not picked up on, is ciphers of variable strength…

Back in the days of mechanical field cipher machines, the strength of the ciphertext against attack varied with which key you used.

Now suppose 10% of the keyspace is obviously weak, 20% is ludicrously weak under one attack, another 20% is also ludicrously weak but to a different attack, and so on, with only 10% of the keys being strong against all the known attacks.

As the NSA is responsible for key scheduling, they can ensure that US entities only use keys from the 10% of strong keys.

If another nation captures an NSA system and back-engineers it to produce their own system, then the chances are they will only know one or two of the attacks, but not all of them. Thus there is a fair chance that 50% of their traffic under the system will be easily breakable…
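The arithmetic behind that 50% is easy to check. Assuming the weak classes above are disjoint, an adversary's haul is just the sum of the fractions for the attacks they know (numbers illustrative, per the comment above):

```python
# Illustrative key-space fractions: 10% obviously weak, four attacks each
# breaking a disjoint 20%, and 10% of keys strong against everything.
weak_fraction = {"obvious": 0.10, "attack_A": 0.20, "attack_B": 0.20,
                 "attack_C": 0.20, "attack_D": 0.20}

def breakable(known_attacks):
    """Fraction of randomly chosen keys an adversary can break, assuming
    the weak classes are disjoint."""
    return sum(weak_fraction[a] for a in known_attacks)

# A copier who rediscovered the obvious weakness plus two of the attacks:
print(breakable(["obvious", "attack_A", "attack_B"]))  # 0.5 of their traffic
print(breakable(weak_fraction))                        # 0.9 if they knew all
```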

We know that the NSA gave loaded technical advice to a well-known Swiss crypto company, and we know that the Hagelin and other mechanical ciphers used up to and including the Korean War definitely had variable-strength key spaces.

We also know that DES was never supposed to be implemented in software but only in export-controlled chips. The DES design (initial permutation etc.) was obviously meant to be difficult to do in software but fairly easy in large-real-estate (for the time) silicon. We subsequently learned from DES that things like differential attacks were known to the NSA and kept quiet until they became obvious.

Then we move to Capstone: not only was the cipher secret but, as we subsequently learned, it was designed in a very special and brittle way.

It became clear after the design was published that even very, very minor changes to its design had significant detrimental effects on its strength.
So the question is: what about AES? Do the NSA have tricks to break it that are not currently known? Well, the argument that “if anybody can, they can” may well be true.

However, does it need to be? The original AES design did not cover things like the effects of “loop unrolling” and “cache attacks” and all sorts of other side channels.

Then there is the likes of Microsoft: why oh why do they put such large amounts of “known plaintext” at the front of all their Office files?

Which is one of the reasons I suggest to people that they first split and swap a file at a known pseudo-random point, then zip it up, then stream encipher it (AES in CTR mode with post chaining will do), before finally AES encrypting using one of the newer chaining modes.
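One possible reading of that recipe in Python, with CBC standing in for "one of the newer chaining modes" and zlib for the zip step; a sketch under those assumptions, not a vetted design:

```python
import os, zlib
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def protect(data: bytes, swap_point: int, k_ctr: bytes, k_cbc: bytes) -> bytes:
    # 1. Split and swap at a keyed pseudo-random point, so the well-known
    #    file header no longer sits at a fixed offset.
    p = swap_point % len(data)
    data = data[p:] + data[:p]
    # 2. Compress, squeezing out much of the known-plaintext redundancy.
    data = zlib.compress(data)
    # 3. Stream encipher with AES-CTR...
    nonce = os.urandom(16)
    enc = Cipher(algorithms.AES(k_ctr), modes.CTR(nonce)).encryptor()
    data = nonce + enc.update(data) + enc.finalize()
    # 4. ...then AES again in a chaining mode (CBC here; pad to block size).
    pad = 16 - len(data) % 16
    data += bytes([pad]) * pad
    iv = os.urandom(16)
    enc = Cipher(algorithms.AES(k_cbc), modes.CBC(iv)).encryptor()
    return iv + enc.update(data) + enc.finalize()

ct = protect(b"known Office header" * 100, swap_point=1234,
             k_ctr=os.urandom(32), k_cbc=os.urandom(32))
```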

Which brings me back to your point,

“So, secrecy isn’t security, but it does reduce risk when combined with good design”

That is true, but sometimes the “risk” being “reduced” may not be the one you think it is, as is the case with the NSA Type 1 systems.

Tom Van Vleck June 15, 2010 8:13 PM

I wish I could get in on this. A new OS, on new hardware, in a new language: I have been part of teams that tried this several times. I hope some good work is done as a result of this BAA.

Nick P June 16, 2010 1:21 AM

@ Tom Van Vleck

So, what did you guys try? Any published results or something that might have some [even academic] value today?

Nick P June 16, 2010 1:36 AM

@ Clive

“Agh there is a difference between “security by obscurity by design” and spaghetti code spun by rodents making a nest ;)”

Point well taken. 🙂

“Thus if they put the best there is in their kit, they may well fulfil part one of their mission, but that means they will fail part two.”

Yes, this is the crux of the problem. The NSA is actually schizophrenic by nature, suffering from multiple and conflicting personalities. Every now and then they come up with a win-win strategy that works for a while, like that damn Swiss company or, more recently, the ECC RNG in Windows. This problem is also why they do things like classifying EAL7 platforms as munitions, preventing their export. As a side effect, they destroy the high-assurance COTS market and leave only their GOTS products.

“If another nation captures an NSA system and back-engineers it to produce their own system, then the chances are they will only know one or two of the attacks, but not all of them.”
This is where our paths diverge. I don’t think this is a viable strategy for real security. Reverse engineering is easier today than ever before, and chip security is cracked on a regular basis. Anything the NSA produces wouldn’t be trusted. It’s one of the reasons Twofish and even Blowfish are still so popular today. The NSA still has a chance of using your scheme when subverting commercial cryptosystems. They have a nice track record in that area, and did you know that the Swiss company is still in business with defense and high-risk commercial customers worldwide? Can you say “intelligence goldmine”?

Clive Robinson June 16, 2010 4:35 AM

@ Nick P,

“This is where our paths diverge. I don’t think this is a viable strategy for real security.”

Nope, we are still on the same track. I don’t think it’s viable any more, so we agree on that 😉

Also, I think the NSA and others realise this as well. There was Bob Morris senior’s parting comment about known “plaintext” (hence my MS comment). And that (sounds too good to be true) comment from an unnamed NSA bod about the speed of progress in the public/academic sector.

However, from a historic perspective, it is what the NSA (and other “allies”) have done in the past, and old dogs tend not to learn really new tricks.

And if you think about countries outside the so-called “WASP nations” (I hate that expression) of the BRUSA agreement and Northern Europe (I include Russia et al. in that), crypto is not really progressing at the same rate (yes, there are odd exceptions, e.g. Israel and some East Asian countries).

And as the mineral resources that WASP nations exploit are mainly at or below the equator these days, I suspect “the old game is still afoot” one way or another.

I think it safe to say that our wars are now likely to be about economic exploitation, and on that score the Chinese have us beat already (see what they are doing in terms of “economic aid” and “education aid” in Africa etc.).

As you and I both know, AES has its own very real “implementation” issues via side channels, and, as you say, the ECC RNG issue tends to suggest the NSA have stepped back from “weak crypto” to “predictable entropy”.

Then of course there is “black box” PK cert generation and the unfortunate problem of kleptography (Adam Young & Moti Yung), all of which suggests the game might not have stopped, just moved to a different field (key generation).

One thing to keep an eye open for is the myth that key scheduling algorithms have to be slow to reduce the speed of a “brute force” or “British Museum” attack on the key…

Let me put it this way,

1, Slow key changing means more “known plaintext” under “the same key”.

2, Also slow key changing combined with poor entropy means that “bad / broken keys” “hang around way longer than they should”….

None of which the NSA would object to in their “secondary mission” to be “ungentlemanly” with respect to others’ private correspondence.

Then there is the issue of “side channel” attacks, and I’m not talking just about the usual view of TEMPEST as EMC on steroids.

That is just a very small part of it. Have a think about “active” “fault injection” attacks, where even a simple unmodulated RF carrier can take a TRNG from 32 bits of entropy down to the equivalent of 8 bits without it being obvious to the user.

As I independently discovered back in the 1980s, microprocessors can have “special faults” induced by modulating the RF carrier with a fault injection sequence. When allied with a frequency “tuned” to a particular physical property of the PCB etc., this allows the attack to be directed against individual circuit traces.

And importantly, due to the effects of cross modulation, the active circuitry will modulate an unmodulated carrier and produce modulated harmonics, so that a receiver can be used to get the correct “trigger point” in time for the fault injection.

Thus it should be possible to not just reduce the entropy of a TRNG but actually make it output very close to the bit sequence you want, which makes a key search fairly easy when combined with known plaintext at fixed offsets.

Others have started to notice odd effects, like TRNGs that appear non-deterministic on their own, but when built into CPUs running the same code, comparing the two outputs shows them to be very much less random than you would expect…

Is it just poor design, or obfuscated design masquerading as poor design? 😉

Am I falling into a “conspiracy theory” mind set?

Possibly, but… these are certainly two of several avenues I would be looking into big time if I were in the NSA’s position, based on the assumption that academia is catching up faster than they like.

Also consider that “generating entropy” and “active fault injection” at a distance are areas that academia appears to have neglected in favour of chasing the next “NIST challenge” to fame and fortune…

And as the old jokes have it,

You can fool some of the people…

Being paranoid doesn’t mean they aren’t out to get you…

Tom Van Vleck June 16, 2010 8:58 AM

@Nick P

I worked on a system called Multics. Hardware protection designed for the OS, language PL/I. The OS succeeded, shipped, and was used for many years. See multicians.org for documentation.

Then I worked on a never released OS for Tandem, proprietary hardware, language Ada. Not much is documented.

Third one was for Taligent, an Apple/IBM joint effort targeted for PowerPC, written in C++. One book published, system never released.

This margin is too small to contain the accumulated learning about strategies that work and don’t in OS development. 🙂

Nick P June 16, 2010 5:45 PM

@ Tom Van Vleck

Ahh, Multics. I commonly cite that as system design done right. Good architecture, proper combination of hardware and software, useful to users, and extremely reliable. It’s one of my favorite older systems. I wish we had more modern systems like it… that weren’t exclusively produced by IBM (System Z) or BAE Systems (XTS-400/STOP).
