Schneier on Security
A blog covering security and security technology.
November 1, 2012
Peter Neumann Profile
Really nice profile in the New York Times. It includes a discussion of the Clean Slate program:
Run by Dr. Howard Shrobe, an M.I.T. computer scientist who is now a Darpa program manager, the effort began with a premise: If the computer industry got a do-over, what should it do differently?
The program includes two separate but related efforts: Crash, for Clean-Slate Design of Resilient Adaptive Secure Hosts; and MRC, for Mission-Oriented Resilient Clouds. The idea is to reconsider computing entirely, from the silicon wafers on which circuits are etched to the application programs run by users, as well as services that are placing more private and personal data in remote data centers.
Clean Slate is financing research to explore how to design computer systems that are less vulnerable to computer intruders and recover more readily once security is breached.
Posted on November 1, 2012 at 6:34 AM
The idea of requiring trust between components in a computer system seems tempting as a security matter, but my question is, who determines what is trusted? If code signing is involved, who controls the keys or determines which keys are valid? How does this not end up being a way for governments or corporations to control what you are allowed to do with your own computer?
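The worry can be made concrete with a toy sketch. Here HMAC stands in for real asymmetric code signing just to keep the example self-contained, and all names (`TRUSTED_KEYS`, `may_run`) are hypothetical; the point it illustrates is that whoever controls the trusted key set, not necessarily the machine's owner, decides what counts as runnable:

```python
import hmac
import hashlib

# Hypothetical trust store: whoever controls this dict controls what "trusted" means.
# Real code signing uses asymmetric signatures (RSA, Ed25519); HMAC is used here
# only so the sketch runs without external libraries.
TRUSTED_KEYS = {"vendor": b"vendor-secret-key"}

def sign(code: bytes, key: bytes) -> bytes:
    return hmac.new(key, code, hashlib.sha256).digest()

def may_run(code: bytes, signer: str, signature: bytes) -> bool:
    key = TRUSTED_KEYS.get(signer)
    if key is None:
        return False  # unknown signer: the platform, not the user, made this decision
    return hmac.compare_digest(sign(code, key), signature)

code = b"print('hello')"
ok_sig = sign(code, TRUSTED_KEYS["vendor"])
print(may_run(code, "vendor", ok_sig))    # True
print(may_run(code, "hobbyist", ok_sig))  # False: signer not in the trusted set
```

Nothing in the mechanism itself says who gets to populate `TRUSTED_KEYS`; that is a policy question, which is exactly the commenter's objection.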
I wonder whether "nipping in the bud the widespread sexism in the industry" will be part of the considered redesign. (Protection against threats of rape and assault is part of security, after all.)
My guess is that at the core is the human, whose accidental and intentional actions undermine, as a whole, the good technical ideas of Dr. Neumann.
As the final step, after all the good purely technical ideas are implemented, concentration should be on embedding idiot-proof (ID10T-proof) components in the architecture. It is similar to job safety: as soon as you open the protective screen, the machine automatically stops the rotor, so you need not worry about being injured by a rotating part. Plus "If men were angels..." -- you know the rest.
How much of this is stuff we already know how to do, but simply can't without disrupting trillions of dollars of legacy systems? A great deal seems to depend not so much on the desired final state but on whether there's a path from here to there.
Well, the first thing I would do is close the DMVs and instead open DCUs (Departments of Computer Usage) where you'll have to pass both a written and skills test before you're allowed to use anything connected to any network. And installing any "smiley pack" would result in an automatic 1-year suspension.
This is probably the case, but if we're ever going to get from 'here' to 'there', it may as well be driven by the DoD – after all, the US Navy already operates the world's largest private network. It seems logical to spend some time mapping out all of the interlocking design choices, and then expend resources migrating to a new architecture only once. Once that's done on a large enough scale, then perhaps a market for this more secure technology could open up to serve interested parties.
The project he has embarked on is partnered with the UK's Cambridge Labs with Robert Watson who is on the FreeBSD board of directors and developed Capsicum, which is a capabilities security system.
If you want to know more, have a read of the paper from 2010 they co-authored.
but my question is, who determines what is trusted?
First thing is, what do you mean by "trusted"?
The ITSec definition is almost the polar opposite of the more normal human definition.
My experience this morning...
I was visiting my regular reading websites, alphabetically arranged...
RISKS-LIST: RISKS-FORUM Digest
Schneier on Security
and I see this posting on Peter G. Neumann (immediately after seeing his name on the preceding website), and after I've seen his name so many times over the past decades!
I started reading USENET's comp.risks.moderated soon after its founding in Aug '85, and have been doing so for the past 27 years as it migrated to the WWW's Risks Digest. I have found the contributions wide ranging, often making you wonder why people don't think things through before launching some system or service. Prof. Neumann's work in this area is foundational.
I would recommend to any computer architect or systems engineer that the Risks Digest (and archives) should be required reading.
How much of this is stuff we already know how to do, but simply can't without disrupting trillions of dollars of legacy systems?
We know how to do most of it; however, for various reasons, it has not been done in commodity OSes.
The CHERI RISC-on-FPGA hardware is a logical successor to the BERI tagged capability system, which in turn has its roots, via TESLA, in the Capsicum capability system you can currently run on FreeBSD or OpenBSD.
The point of the CHERI system is to bring together two parts of a computer system that have not yet really been brought together in a commercial system.
The two parts are the controlled access to paged virtual memory via the Memory Management Unit (MMU) and the addition of hardware-based capability tagging. Together they are used to make a more secure "sandbox".
Thus these will still allow the bulk of existing legacy software to run on the system, because the CPU in effect stays the same. The difference is that the capabilities are set by the executive (kernel), and if the application process goes outside its capabilities it in effect raises a priority exception via the equivalent of an NMI.
Whilst this does allow backwards compatibility and greatly increased security, it has limitations and could potentially make things less secure if rogue code could fake up valid capabilities (something that only the executive should be able to change) and thereby get access to the MMU, such that it can then break out of the sandbox to get at the linear memory that is used not just by other applications but by the executive as well.
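A minimal sketch of the general idea (not CHERI's actual ISA; all class and method names here are invented for illustration): every load and store is checked against a capability set by the kernel, giving base, length, and permissions, and a violation raises an exception instead of silently touching memory:

```python
# Toy model of hardware-checked capabilities layered on ordinary addressing.
class CapabilityFault(Exception):
    pass

class Capability:
    def __init__(self, base, length, perms):
        self.base, self.length, self.perms = base, length, perms

class Memory:
    def __init__(self, size):
        self.cells = [0] * size

    def load(self, cap, addr):
        self._check(cap, addr, "r")
        return self.cells[addr]

    def store(self, cap, addr, value):
        self._check(cap, addr, "w")
        self.cells[addr] = value

    def _check(self, cap, addr, perm):
        # The check happens on every access, in "hardware": out-of-bounds or
        # unpermitted accesses fault rather than reach memory.
        if perm not in cap.perms or not (cap.base <= addr < cap.base + cap.length):
            raise CapabilityFault(f"{perm} at {addr} outside capability")

mem = Memory(1024)
app_cap = Capability(base=256, length=128, perms="rw")  # granted by the kernel
mem.store(app_cap, 300, 42)
print(mem.load(app_cap, 300))   # 42
try:
    mem.load(app_cap, 0)        # outside the sandbox
except CapabilityFault as e:
    print("fault:", e)
```

The security argument in the comment above is precisely about what happens if rogue code can forge an `app_cap` with a larger bounds or extra permissions.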
The way I've looked at doing it is with a minimum of a two-CPU system, one of which is the general-purpose computing core and the other a hypervisor that could in fact be a reduced-function state machine. The hypervisor, not the general-purpose CPU, would control the configuration of the MMU, and thus rogue code would have to somehow corrupt the functioning of not just the general-purpose CPU but the hypervisor as well, and in such a way that it could get advantageous control of the MMU.
It seems to me that we're looking over a ledge. Right now, we control computers. You can do anything you want, within your ring. If you want more power, you can always go up a ring (if you have the passwords).
With talk of computer immune systems and such, we're going to start entering a realm where the limits on your power are owned by an application that no one fully controls. Very soon we will be losing control of our tools, as they start to act more and more alive.
Or if you look at it negatively... it's just a war on general-purpose computing.
Well no, not really, and actually somewhat less so than the current "trusted computing" ideas that Microsoft and Intel have put out.
The simple fact is our general-purpose computing platforms have little or no security at the hardware level that is actually used (still the old two-ring OS model, although the IA-32 core can support four rings). Provided the rogue code can get at and change memory outside of its own allotted virtual memory data space, then it's effectively game over. A capability system sits parallel to the usual system architecture and raises an exception should a process actually try to access areas it should not. We have a very crude form of this currently, but it's not really used and lacks sufficiently fine granularity to be anything other than a blunt instrument.
Have a read of the paper I linked to above.
In theory, all that is needed is a memory-protection unit (e.g. System/360 storage keys) and a well-defined supervisor/user mode with well-behaved switching. Trouble is, those facilities are managed by code, which is written by humans, who are managed by managers, who have been told by academics that it is a "Simple Matter Of Programming".
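The storage-key model mentioned above can be sketched as a toy Python model (not the System/360 instruction set; names like `KeyedMemory` are invented): each page carries a protection key, and a store is allowed only if the task's key matches the page's key, or the task runs with key 0, the supervisor key:

```python
PAGE_SIZE = 4096

class ProtectionException(Exception):
    pass

class KeyedMemory:
    def __init__(self, pages):
        self.data = bytearray(pages * PAGE_SIZE)
        self.page_keys = [0] * pages  # key 0 pages: supervisor-only

    def set_key(self, page, key):
        self.page_keys[page] = key

    def store(self, task_key, addr, byte):
        page = addr // PAGE_SIZE
        # Key 0 (supervisor) may store anywhere; otherwise keys must match.
        if task_key != 0 and task_key != self.page_keys[page]:
            raise ProtectionException(f"key {task_key} cannot store to page {page}")
        self.data[addr] = byte

mem = KeyedMemory(pages=4)
mem.set_key(2, 5)                  # page 2 belongs to the task running with key 5
mem.store(5, 2 * PAGE_SIZE, 0xFF)  # allowed: keys match
mem.store(0, 0, 0x01)              # allowed: supervisor key
try:
    mem.store(5, 0, 0x01)          # user task touching a supervisor page
except ProtectionException as e:
    print("fault:", e)
```

The commenter's point survives the sketch: the hardware check is trivial; the hard part is the code that assigns the keys.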
The CDC 6x00 had both protection and relocation (much better efficiency, although less so than page tables), and very separate supervisor/user modes (in its original incarnation, only user code ran on the CPU). There was even a capability-based OS for it (CAL TSS), in the late 1960s and early 1970s. I had a lot of hope for it, but I'm still waiting for capabilities to become mainstream.
So I wrote a long response to this on reddit a few days ago. It is overly focused on certain points that irritated me.
One question I have is whether there is any way around the following problem: any system that we try to secure will be attacked; to update the system to protect it, we have to modify it; and because it can be modified, it remains vulnerable. Do we honestly think that ruling out the attacks we know about now will prevent new ones in the future? Much of the content of this blog seems to say that in some contexts this approach to security fails (TSA).
One thing that I thought was missing was the problem not with the hardware itself, but with the systems of trust that can be manipulated, such as any certificate system.
A project to address fundamental failures that lead to compromised systems is useful, but probably should be balanced against strengthening other systems against attacks that result in economic loss by other means (eg fraud, phishing).
It's nice to see Bruce posting on him. I'm excited about the Clean Slate effort and will continue to post progress on various projects if readers want. Does anyone know where the full list of Clean Slate projects is? I know Tolmach's group is also doing SAFE under Clean Slate.
It'll be fascinating to see the results of this.
To me, Capabilities are a massive fail - as demonstrated by Android. It's better than nothing, but there's still way too much gap between what a user *expects* a task to do and what the task is *permitted* to do.
And when you think about how processes could collaborate/interact via side-channels and wonder how anyone could analyse that... (or how they'd prevent side-channels...)
Capabilities are a massive success, done right. The LOCK platform used capabilities in an Orange Book A1-class product, as did KSOS. Many modern "secure" OS products and designs use them. In all of these, they are very fine grained, the TCB is tiny, and the enforcement mechanism is rigorously validated.
In Android, capabilities are very coarse grained, the platform TCB is huge and complex, and the security mechanisms are developed with low-assurance methods. Hence the results you see. OKL4's SecureIT & OK:Android are closer to how it will need to be done. That Sandia team porting Inferno to Android to reduce the TCB is another fine example.
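The coarse-versus-fine contrast can be illustrated with a hypothetical permission checker (this is not Android's API; both functions and the policy shapes are invented). A coarse grant covers everything in a category, far more than the user expects, while a fine-grained capability names the exact objects a task may touch:

```python
class Denied(Exception):
    pass

def check_coarse(granted, resource_class, _item):
    # Coarse model: one permission string covers the whole category.
    if resource_class not in granted:
        raise Denied(resource_class)

def check_fine(granted, resource_class, item):
    # Fine model: the grant enumerates the individual objects.
    if item not in granted.get(resource_class, set()):
        raise Denied(f"{resource_class}:{item}")

coarse = {"CONTACTS"}               # "may read contacts" -- all of them
fine = {"CONTACTS": {"alice"}}      # may read exactly one entry

check_coarse(coarse, "CONTACTS", "bob")   # passes: more than the user expected
check_fine(fine, "CONTACTS", "alice")     # passes
try:
    check_fine(fine, "CONTACTS", "bob")   # blocked by the narrow grant
except Denied as e:
    print("denied:", e)
```

The gap the earlier commenter complained about lives entirely in the difference between those two `check_*` functions.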
See, Neumann and others figured out how to solve plenty of our problems back in the day. Ex: covert /side channels see Kemmerer's and Wray's work. Thing is, modern people didn't learn all this stuff or read much work in academia showing solutions. So, they keep "inventing" "new" ways to "solve" solved problems instead of effectively using what we know already. Others don't care or try to use inherently risky tech in their designs (*cough* http/XML *cough*).
So, it's mostly a lack of wisdom in INFOSEC circles. There's apparently not a reliable transfer of lessons learned from old school & realistic academics to new generations of security engineers.
I'm not at all sure what to make of this initiative. There are already formal verification methods that will let you establish model equivalence. So any high-level language description can be tested against the spec, and then you can prove that what is in the chip-layout database is a binary logical equivalent of the original spec.
Unfortunately, this just means that hardware chip subversion attacks will focus on implementation stages beyond where logical equivalence can be tested, or they will focus on analog structures rather than digital logic structures.
In modern sub-micron chip designs (anything below 130nm) there are lots of structures added just to fill empty areas so that surface planarization is maintained. These "fill" structures are typically not in the logical layout database. Additionally, the stage after this involves adding focus-adjusting fills, called OPC structures. After this you have phase-shifted masks; the list goes on and on, so at 20nm and below the actual photomasks have little in common with the original layout database. Additionally, the actual masks contain a lot of fab-specific "secret" information, which the fabs will not even want clients to really know.
The point of saying all this is to show that the whole chip design and production flow is subvertible at points where logical equivalence cannot possibly be extracted. All of this of course ignores the distinct possibility of side-channel attack hardware being embedded in the chip.
I'm always very suspicious of any proposals like this. Although I've not looked into what they're doing here in detail I find that usually they have very little understanding of how attacks happen in the real world.
Take code signing for example. Yes this does defend against some malicious actions, but in practice it does little to prevent malicious manipulation of already trusted (and thus properly signed) code. The real problem I find with most initiatives like this is that they tend to spend all their efforts defending against boogie men that don't exist while ignoring the ones that are actually hiding in the closet waiting to strike.
@ Michael Lynn
"Take code signing for example. Yes this does defend against some malicious actions, but in practice it does little to prevent malicious manipulation of already trusted (and thus properly signed) code. The real problem I find with most initiatives like this is that they tend to spend all their efforts defending against boogie men that don't exist while ignoring the ones that are actually hiding in the closet waiting to strike."
I've only read the proposals from three groups related to Clean Slate so far: the SAFE project from Penn, TIARA from MIT, and the Cambridge work. All three make it easier for hardware to enforce controls on information at very granular levels. Two use processor-level modifications to apply type checks at the word level. One employs high-level language and verification technology instead of purely low-level systems programming. They're all designed with principles like POLA, clean interfaces, good abstraction, strong spec-to-implementation matching, etc. Even better, they each provide ways to give strong security enhancements to applications written by average coders.
So, are they securing against boogeymen? Well, attackers are STILL beating systems with implementation flaws related to memory management & foreign code execution. I'd say getting that under control by itself would be quite helpful. These programs are trying to go way beyond that. Still, they will be most effective combined with technologies for dealing with the other major threats, esp. low hanging fruit.
One question I have is whether there is any way around the problem that...
Your question is not very clear at all, so I've two choices: guess at a meaning and answer that, or try to answer all the alternatives I can see.
What I think you are saying is,
1, All systems are vulnerable.
2, To fix a vulnerability the system must be changed or modified.
3, Because a system is modifiable it is vulnerable.
Put simply, all the above are true statements, but the argument is both circular and incomplete.
As it stands it is true of all systems, irrespective of whether they are computer based or not; it is a consequence of free will and imperfect future knowledge.
To solve the issue fully you would have to stop free will entirely and have perfect foresight, neither of which is possible or in fact desirable, as existence as we know it could not continue.
From a much more restricted viewpoint, on any general-purpose computing (universal Turing) engine, anything within the limitations of its system logic is possible, including code that is self-modifiable. Due to such a system being unbounded, there is an infinite set of possibilities.
However, there are consequences to this:
1, Not all things are possible.
2, Of the infinite set of possibilities, few if any are of use or desirable in any way to an external observer.
3, Many of these infinite possibilities will never finish, and it is not possible to predict with certainty which will and which will not (the halting problem).
4, Many of these infinite possibilities appear, from the externally observed results, to be the same even though internally they are not (a consequence of the halting problem).
Although there are many other consequences, those are sufficient to demonstrate that, apart from trivial cases, universal computing engines are not predictable (i.e. imperfect future knowledge) and that any given non-trivial function can be achieved in what is in effect an infinite number of ways, including those that modify the way they behave with time.
And this "modify the way they behave with time" and "not halting" are the two most important functions of an operating system. That is they remain available at all times and will load code or applications as and when required. Likewise individual applications can have the same behaviour (think of a web browser).
So with further thought you will realise that, in all but trivial cases, a system cannot be both known to be secure and of much use (how much use and how trivial are actually quite important).
It actually turns out that this was known as a mathematical proof prior to Church and Turing publishing their respective ideas.
So if we know a system cannot be "known to be secure", does this mean we cannot have "secure computing"? In theory, no, we cannot; but in practice we can design systems that, although not 100% secure, are sufficiently close to be quite usable and practical to implement.
How do we build such a system? Well, the same way that society does. That is, we "trust", in the human sense, that the computer will do as we want and will not "betray" us, accidentally or otherwise. Now, as humans know, trust can be breached, and where this is important we put in place systems to reduce the probability of betrayal. That is, we check the reliability of the individual in the past, we reduce their access to information, and we keep the information in controlled environments. Further, we check that those entering and leaving such an environment do not have the ability to leak information from it, and we also watch those individuals for signs that they may be about to leak information.
That is, we enforce a limitation on the capability an individual has to betray us, and we watch them for certain signature behaviour that indicates they may be more likely to betray us.
Thus a capability system, if properly implemented, will stop many applications from getting access to resources that they might use to leak information, thereby stopping them from being able to leak it.
However, there are applications that need to access resources that could be used to leak information as part of their normal operation. This means that a capabilities-only solution will not work for such applications, so other methods have to be used. One is the equivalent of a "background check": we use formal methods to design and build the application in a way that reduces the possibility that it will behave in a way that leaks information.
But there are still applications where the movement and manipulation of information is integral to their function, and which, even though correct in their design, can be used in a manner that leaks information. This requires that the information be tagged in some way, such that trying to use it in an unauthorised manner will cause an exception to be raised with the executive and thus cause the action to be halted.
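The tagging idea in the previous paragraph can be sketched in Python (labels here are just "secret"/"public", and the class names are invented; real systems enforce this in hardware or the language runtime): data carries a label, the label propagates through computation, and the executive checks it at every output channel:

```python
class LeakException(Exception):
    pass

class Tagged:
    def __init__(self, value, label):
        self.value, self.label = value, label

    def __add__(self, other):
        # Labels are "sticky": the result is as sensitive as its most
        # sensitive input.
        other_label = other.label if isinstance(other, Tagged) else "public"
        other_value = other.value if isinstance(other, Tagged) else other
        label = "secret" if "secret" in (self.label, other_label) else "public"
        return Tagged(self.value + other_value, label)

def send_to_network(data):
    # The "executive" checks the tag at the channel and halts the action.
    if data.label == "secret":
        raise LeakException("secret data reached the network channel")
    return f"sent {data.value}"

salary = Tagged(90000, "secret")
bonus = Tagged(5000, "public")
total = salary + bonus                         # label propagates: still secret
print(send_to_network(Tagged("hello", "public")))
try:
    send_to_network(total)
except LeakException as e:
    print("halted:", e)
```

Note the sketch only catches explicit flows; the covert channels discussed below are exactly the flows a scheme like this cannot see.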
However, even this will not prevent some forms of information leak, and other methods have to be employed. One of these is to observe the behaviour of the application at a much lower level and look for signs of abnormality. That is, you look at the application's functional signature and check that its behaviour is expected, not unexpected. However, to do this effectively you have to vastly reduce the complexity of the application, or parts of it, such that a behavioural signature is possible.
Ultimately, though, it is possible for a very skilled user or malware to tailor the behaviour such that a signature system will not pick up the signs that a betrayal is about to happen or is actually in progress. Such things happen due to covert channels, and although there are known ways to reduce a channel's bandwidth, it is not possible to detect them all and still have a functional and usable system.
The article begins nicely:
> “Everything should be made as simple as possible, but no simpler.”
And then it follows the principle that there is no problem which cannot be solved by gluing on another layer of complexity:
> every software object in the system to carry special information that describes its access rights on the computer
Maybe the computer industry should have stuck with that other thing they did in Palo Alto, Smalltalk: not because of the language itself but because of its principles, never writing the same algorithm twice, total reusability.
Meaning there is a single implementation of a "fifo" in the entire computer, and that one is tested.
Obviously, you cannot build a company writing software in that world; you cannot sell your pre-compiled, secret-source software implementing your "fifo", because the executable is the source code (you cannot hide), and the system itself manages the different transformations of source to executable (no Unix concept of source files, assembler files, a link phase, or libraries).
To sell a software you would have to rely on copyright laws, and those are totally ignored by everybody.
There is also something to be said about very nice hardware ideas and interfaces which are so complex and so buggy that the driver only works "most of the time"; but hardware implementing half the interface is so cheap that everybody buys it, and that becomes the standard...
I have also been a long-term reader of both the Risks List and Bruce's blog (albeit not as long-term as you), and agree that both are very informative and educational. One additional discovery from this article about PGN was his musical talents; perhaps Bruce could accompany him on the bongos sometime?
Alright, here's the link I wanted to post but couldn't due to my mobile device limitations. The link is a good summary of both how security requirements evolved & specific examples. Perrine makes the point, which I support, that they keep reinventing the wheel & not using what's proven to work. He gives specifics on systems like PSOS and KSOS, which Neumann also worked on.
;login: article on KSOS by Perrine
This other paper focuses on formal verification technology and discusses many projects. It includes modern ones along with old ones, comparing technology and effort. Since it was written, the L4.verified project has completed and been discussed on this blog.
None of those designs would be an open computer that is fully controlled by the owner. They would not be computers in a basic sense anymore. Here as everywhere, security is the enemy of freedom.
@ Marc B,
Here as everywhere, security is the enemy of freedom.
Security is a tool, and like all tools it's agnostic to its use; it's the user of the tool who decides whether that use is good or bad.
@ Marc B
"...as everywhere, security is the enemy of freedom"
Tell that to the people maintaining Tor, Freenet, email mixers, and... what the heck... "free" security software.
"None of those designs would be an open computer that is fully controlled by the owner."
I'm not sure which you are talking about: Cambridge or the ones I posted. Either set can be made pretty well owner controlled. The trick with both is making the mechanisms that define subjects and objects, load/interpret security policy, and enforce security policy as bulletproof as can be. Then the user is in control from there: they load the software, define the security policy, etc.
In the old days, the user was even more in control in some situations. The high security systems had a "System Generation" requirement where the customer (or at least evaluator) generated the system on site from source code & configuration data. The result could be checked against a signature or analysed. This was to prevent subversion & certain MITM attacks. Most Linux desktops or servers, with source available, aren't even done this way.