Should the Government Stop Outsourcing Code Development?

Information technology is increasingly everywhere, and it’s the same technologies everywhere. The same operating systems are used in corporate and government computers. The same software controls critical infrastructure and home shopping. The same networking technologies are used in every country. The same digital infrastructure underpins the small and the large, the important and the trivial, the local and the global; the same vendors, the same standards, the same protocols, the same applications.

With all of this sameness, you’d think these technologies would be designed to the highest security standard, but they’re not. They’re designed to the lowest or, at best, somewhere in the middle. They’re designed sloppily, in an ad hoc manner, with efficiency in mind. Security is a requirement, more or less, but it’s a secondary priority. It’s far less important than functionality, and security is what gets compromised when schedules get tight.

Should the government—ours, someone else’s?—stop outsourcing code development? That’s the wrong question to ask. Code isn’t magically more secure when it’s written by someone who receives a government paycheck than when it’s written by someone who receives a corporate paycheck. It’s not magically less secure when it’s written by someone who speaks a foreign language, or is paid by the hour instead of by salary. Writing all your code in-house isn’t even a viable option anymore; we’re all stuck with software written by who-knows-whom in who-knows-which-country. And we need to figure out how to get security from that.

The traditional solution has been defense in depth: layering one mediocre security measure on top of another mediocre security measure. So we have the security embedded in our operating system and applications software, the security embedded in our networking protocols, and our additional security products such as antivirus and firewalls. We hope that whatever security flaws—either found and exploited, or deliberately inserted—there are in one layer are counteracted by the security in another layer, and that when they’re not, we can patch our systems quickly enough to avoid serious long-term damage. That is a lousy solution when you think about it, but we’ve been more-or-less managing with it so far.

Bringing all software—and hardware, I suppose—development in-house under some misconception that proximity equals security is not a better solution. What we need is to improve the software development process, so we can have some assurance that our software is secure—regardless of what coder, employed by what company, and living in what country, writes it. The key word here is “assurance.”

Assurance is less about developing new security techniques than about using the ones we already have. It’s all the things described in books on secure coding practices. It’s what Microsoft is trying to do with its Security Development Lifecycle. It’s the Department of Homeland Security’s Build Security In program. It’s what every aircraft manufacturer goes through before it fields a piece of avionics software. It’s what the NSA demands before it purchases a piece of security equipment. As an industry, we know how to provide security assurance in software and systems. But most of the time, we don’t care; commercial software, as insecure as it is, is good enough for most purposes.

Assurance is expensive, in terms of money and time, for both the process and the documentation. But the NSA needs assurance for critical military systems and Boeing needs it for its avionics. And the government needs it more and more: for voting machines, for databases entrusted with our personal information, for electronic passports, for communications systems, for the computers and systems controlling our critical infrastructure. Assurance requirements should be more common in government IT contracts.

The software used to run our critical infrastructure—government, corporate, everything—isn’t very secure, and there’s no hope of fixing it anytime soon. Assurance is really our only option to improve this, but it’s expensive and the market doesn’t care. Government has to step in and spend the money where its requirements demand it, and then we’ll all benefit when we buy the same software.

This essay first appeared in Information Security, as the second part of a point-counterpoint with Marcus Ranum. You can read Marcus’s essay there as well.

Posted on March 31, 2010 at 6:54 AM • 57 Comments

Comments

John March 31, 2010 7:10 AM

Security and reliability have to be designed in. Many commercial applications now run on an operating system that was cobbled together, and whose owner has been trying to fix it retrospectively in recent years.

There are alternative platforms that were designed to be robust and secure but are not fashionable. Software architecture is not fashionable either, but it is necessary if you want secure, reliable, and long-lived systems.

spencer p March 31, 2010 7:28 AM

I believe the largest benefit is that keeping development closer to home – within the country, within the state, or within the agency – will provide better support.

If, say, Iraq wrote our software, it’d be harder to litigate to get support or even the code than if England did. If it was done by Microsoft, the court order is much easier, and so on.

Perhaps “secure” here doesn’t mean technical security but some sort of social security.

szigi March 31, 2010 7:30 AM

You cannot practice medicine, practice law, build a bridge, drive a car if you are not qualified and certified.

Why is it that we frequently see people designing and writing code for more or less critical systems without any proof that they are actually fit for the job?

jeez March 31, 2010 7:44 AM

Software is not like medicine, law, or bridge-building. You can’t simply say “we should write more secure software,” because that ignores the fact that it costs tens or HUNDREDS of times as much to even try to do that, and no one is willing to pay for it.

Modern software programs are like machines with two million moving parts. Imagine if I gave you a machine with two million moving parts in it and you said to me “prove that none of these parts can break down for at least 5 years”. I would probably just laugh at you. Well, I laugh at the people who say “we should all just write better software” because it is never going to happen.

JohnN March 31, 2010 7:54 AM

@jeez –

That type of analysis goes on all the time with MTTF calculations for complex systems. They don’t prove that none of them will break down, but they can be used to make statistically valid predictions about when something will break down, and the analysis identifies the weakest, most vulnerable points in the system.
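For a rough sense of how such a calculation works, here is a minimal sketch in C, assuming independent components with constant failure rates; every number is invented purely for illustration:

    #include <stdio.h>

    /* Toy MTTF estimate for a series system: the system fails as soon as
     * any component fails, so the component failure rates (1/MTTF) add.
     * All figures below are made up for illustration only. */
    int main(void) {
        double mttf_hours[] = { 50000.0, 120000.0, 30000.0 };
        const int n = sizeof(mttf_hours) / sizeof(mttf_hours[0]);
        double total_rate = 0.0;

        for (int i = 0; i < n; i++)
            total_rate += 1.0 / mttf_hours[i];   /* rates add in series */

        printf("Estimated system MTTF: %.0f hours\n", 1.0 / total_rate);
        return 0;
    }

The component with the lowest MTTF dominates the result, which is exactly the “weakest, most vulnerable point” the analysis is meant to surface.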

David Harris March 31, 2010 8:15 AM

Hey Bruce, good article.

I think the only aspect you left out was corruption/incompetence. e.g.:

“By and large if we demand (and offer to pay for) higher-quality software, we still won’t get it. We’ll usually just spend 10 times as much money. Examples abound. And while this would improve over time if we were consistently and absolutely brutal in our response to getting poorly-made software, we as a society don’t actually hold people responsible for such things. So we need to do that too.”

You might think “and when you receive blatantly broken and insecure software despite having agreed to pay for assurance, punish those who provided it” goes without saying, but I think it bears repeating.

Thanks,

Dave

Todd March 31, 2010 8:21 AM

@jeez

Perhaps software is a bit like building a bridge. The problem is, most software is that log tossed over the creek: barely sufficient, and you hope you don’t fall off. Building a proper (DOT-approved!) bridge is, as you say, hundreds of times more expensive. As Bruce says, Boeing and the Feds pay that bill (sometimes!).

Very mature software packages (Windows x? Word 21xx?) might begin to resemble that bridge, but accidentally and with some questionable foundation elements.

I think there’s a young science here, already hatched but waiting to catch enough mainstream attention, research and practice to begin to develop into something that is both useful and affordable.

Wayne March 31, 2010 8:53 AM

Is there a link to find all these point-counterpoint essays in one central authoritative place? Interesting to see Bruce and Marcus duking it out!

AlanS March 31, 2010 9:08 AM

Off-topic but likely to be of interest:

GCHQ: Cracking the Code
http://www.bbc.co.uk/programmes/b00rmssw

“The BBC’s Security Correspondent Gordon Corera gains exclusive access to Britain’s ultra secret listening station where super computers monitor the world’s communications traffic and Britain’s global eavesdropping and electronic surveillance operations are conducted”

John March 31, 2010 9:16 AM

@JohnN, Todd, et al

Right now I have to partially agree with jeez. Not because of the 2 million parts argument, but because my opinion at this time is that software development is a craft discipline and not an engineering discipline. Let me explain what I consider an engineering discipline to be, to show what I mean. In an engineering discipline, we can determine, prior to the actual construction of the object and to a high level of assurance, that the object will actually serve its purpose and is practical. We could also determine if the project can’t be constructed, and identify in what areas our technologies have shortcomings and by how much.

For instance, it is currently possible to design an orbital “beanstalk” or elevator that is capable of lifting a load from Earth’s surface to past geosynchronous orbit. We know what the tensile strength of the material to construct such an object needs to be. We can compute the required size and dimensions for such a project and so forth. If we develop materials with a greater tensile strength, we can easily calculate the changes to the required design. And all of this can be done prior to starting construction. Same principles apply to the construction of bridges, buildings, etc.

That is an engineering discipline.

However, software development is unfortunately still very much a craft or art. Let’s say I wish to create an artificial intelligence with average human level reasoning capability. OK, I’ve stated my problem. Now can you tell me in what areas our current technology falls short in terms of reaching that goal? And for those areas that fall short, by how much they need to improve?

I somehow doubt those two questions can be answered at this time. If software development were an engineering discipline, they could be.

We’re striving towards such a goal, but at this time we’re not even sure how far away that goal is. But until it’s an engineering discipline, those 2 million parts need to be hand-crafted and understood. Only recently has there been a proven-correct OS kernel, seL4. And that kernel was only proven correct around the beginning of this year. And it took over 5 years to prove a mere 7,500 lines of code correct. Never mind that the formal proof itself ran to over 200,000 lines. That kernel is an amazing project. But the magnitude of the effort to prove a mere 7,500 lines correct merely underlines and demonstrates that we have a long way to go.

Matt March 31, 2010 9:22 AM

Chrome OS will fix a lot of the problems on the client side. Even today, browser extensions that open PDF and DOC files via web applications like Google Docs shut down a huge attack vector. We’re not there yet, but soon you can finally kick Acrobat and Word to the curb.

The future of computing is more secure and costs less, not more.

My sense is that we’re really on the cusp of making real progress. The app store model coupled with solid TPM will deal a crippling blow to malware.

Clive Robinson March 31, 2010 9:39 AM

@ Bruce,

“What we need is to improve the software development process,”

Yes, and to do that we need to look in a number of areas and stop building castles on clouds.

The first thing we need is a “quality” like process framework for “security” that is in place before even the first thoughts on a new system are written down (yes I know “framework” is a dirty word in the US).

The second thing is we need to understand and agree to what we are talking about.

Otherwise,

“so we can have some assurance that our software is secure”

is an empty and meaningless statement. To see why, ask the question:

Assurance of what exactly?

And then the question

How do we verify it?

Which means we have to have something to measure in a meaningful way. Currently the problem with the industry is that, with “no decent metrics”, we fall back on “best practice”…

All of which just gives plenty of room for arm-waving, but nothing that the “Newtonian scientific method” can be applied to.

You need to be able to specify what it is you want “Assurance” for in a way that can be independently measured and verified.

Because decent security metrics don’t exist in any meaningful way, we are like European castle builders of 1,000 years ago: we know a “motte” will slow people down on the way to the bailey, but we don’t know how to build them most effectively for a given purpose.

Thus by using the wrong methods of construction we end up with a bailey that looks strong but is actually incredibly weak due to the poor footings provided by a badly built motte.

Many people may agree with,

“The key word here is ‘assurance.’”

But the simple fact is “assurance” is a word that means little or nothing, not just with regard to systems security, but to systems reliability and systems availability as well.

Likewise it means at best only marginally more when you talk about firmware or microcode used within the systems at the lowest levels.

We really need to get to the “bedrock” of the issues and take a very, very serious look at the failings of the architectures on which we design our systems.

The von Neumann architecture might be great for single-CPU, resource-limited systems, but it sure sucks for “availability” in its various forms.

It is not even particularly useful for embedded control systems these days, as various Harvard-architecture microprocessor systems have shown.

Perhaps from the “availability” aspect the von Neumann architecture needs consigning to the “dustbin of history”.

Ever since the late 1970s we have known that “single CPU systems” are an evolutionary cul-de-sac or dead end for general-purpose computing. Even the first IBM PCs had two CPUs in the box, and they got the idea from even earlier Apple designs (i.e. Microsoft’s CP/M card for the Apple II).

When we start thinking constructively about “assurance” for processor architectures, we might start making headway on the foundations the software eventually rests on…

kangaroo March 31, 2010 10:10 AM

Ok, this is just plain terrible: “Code isn’t magically more secure when it’s written by someone who receives a government paycheck than when it’s written by someone who receives a corporate paycheck. It’s not magically less secure when it’s written by someone who speaks a foreign language, or is paid by the hour instead of by salary. Writing all your code in-house isn’t even a viable option anymore; we’re all stuck with software written by who-knows-whom in who-knows-which-country.”

What a straw man. No one ever argued that some magical process increases security with in-house development!

No, the question is whether a commitment to the organization and product motivates folks to do a better job. You can trivially show that someone who actually BELONGS to the organization has more of a commitment to the organization. That a temp is more likely to cut corners than someone who is a lifelong bureaucrat, or has their health insurance tied in to the organization.

WORST COLUMN EVAH. That’s just lazy — and it makes me think poorly of “Information Security” that they let you get away with it; what, no editors at that esteemed periodical? Just spellcheckers?

Nobody March 31, 2010 10:27 AM

One obvious solution is to simplify.

The reason that the software in a plane, or the ABS in your car works is that it does one thing – not that it’s written by experts or written on a provably correct platform.

Commercial OSes have to add new features to persuade you to upgrade, but why are we using general purpose OSes on office computers?

In most organisations people need to look up database records, read/write email and read/write documents. Why do they have a machine with any local storage at all, let alone one that can install and run arbitrary programs?

Instead of having security policies and firewalls in place to stop users connecting to itunes or clicking on ad-banners, why not have limited browsers running on limited OSes that simply can’t do that?

If only there was some sort of system that you could reconfigure to only have minimal functionality, perhaps one where you could even examine the source code yourself. And suppose this was available to governments and researchers to use – for free even.

Tordr March 31, 2010 11:48 AM

I feel that the essay mixes up buying the services of people who develop code with buying licences.

Most software sold is not sold on an individual basis, where each program is specially designed. Most software is sold as licences. That means you have just bought the right to run that software on one computer. You do not get the right to see the source code, and you are not allowed to see the bug reports.

What you get is a black box with the vendor’s stamp on it saying: “It runs fine in our test lab, and if it crashes it is your fault”. No wonder the vendor does not spend any more money on security than strictly necessary.

Kyle Wilson March 31, 2010 12:45 PM

I think that the biggest ‘driver’ here is that software products tend to expand to fill the available developer resources. If your ability to code increases by an order of magnitude (say) and your competitor produces a product with ten times the cool features while you spend the resources that the new tools make available building a more robust application, your competitor will (generally) kill you in the market. Security and internal robustness (past a relatively low threshold) aren’t immediately obvious and don’t move product. I’m afraid that this is likely to continue to be the case unless something changes radically in the software market. Everyone will produce products that just push the edge of what available tools with reasonable resources make possible and as a result, the less visible aspects of software products will always fall seriously short of the ideal.

David Andersen March 31, 2010 1:28 PM

Bruce, I think the article conflates several issues of software development. In particular, overall software quality (including security and assurance) and the question of deliberately introduced backdoors. Going back to the days of Thompson’s Reflections on Trusting Trust, it’s been clear that detecting maliciously and cleverly introduced backdoors is a very hard game. The question to me is: Can we engineer systems where the TCB is small and robust enough to be fully verified (and/or developed by developers trusted to introduce only bugs, not to maliciously hide them), and within this system safely incorporate less trusted code. We’re not going to have a formally verified Windows OS any time soon – let’s take a practical path.

Corey Mutter March 31, 2010 1:31 PM

There is an instance where drawing a government paycheck does make things better: market failures.

The open market can’t provide secure software, because security isn’t signalled well in the end product, so nobody will pay more for it.

The market rewards quickly developed, barely functional software disproportionately well, so a free-market company cannot decide to develop secure software; it won’t survive.

The government needn’t turn a profit on its software, so it can make the decision to focus on security.

DC March 31, 2010 1:35 PM

@kangaroo

Epic fail. As a lifelong consultant writing embedded code — it’s quite possible that an outside contractor can be much more loyal and interested in a company’s success than an employee of same, as well as much more competent. My customers all say so anyway. And it’s true, and there are some decent reasons it’s true.

One is that one serious (or not even serious) lapse and you’re out of there as a consultant — no liabilities for the company, since you can’t claim any breach of employment law, whereas an employee may not be sacked so easily. So you listen hard and do what’s wanted by the customer that much harder if you value your income. You give them what they really need along with what they ask for, when they don’t quite know what they should be asking for. At least a good, successful consultant does.

An employee may labor under the (correct) suspicion that even if they do really well, someone else will take the credit. We never had that problem; when we did a winning product, everyone in the companies we worked for knew it, and we were rewarded as such — in other words, our loyalty and dedication were repaid in kind, and in cash, not to mention favorable consideration for future work and a stipend when there was nothing to do, so we’d be motivated to stay ready to do new work and support the older stuff.

In this case, the topic was primarily high-9s operating systems for embedded uses, and digital signal processing, at which our customers knew we were superior to their in-house talent — no question we were miles ahead, and part of our pay was for helping them learn.

Unlike a grunt employee, we always deal and dealt right at the top — CEO or owner level. Cuts the crap considerably. We’ve even been project leaders who tasked employees of the companies we’ve worked for to show their erstwhile managers how to get these slugs to produce worthwhile product. Getting to work with us instead of their normal boss was often a fought-over perq…

Just saying — it may be like that where you are — so, run! It’s not like that everywhere. And if you really “have the stuff” it’s not like you worry much — customers fight for your time, not the other way around.

AppSec March 31, 2010 1:58 PM

@John:
What is average human reasoning capability? Maybe that’s why your software hasn’t been defined yet 🙂

Designing software is not any more of an art than designing a building. Technology changes, principles do not.

Brandioch Conner March 31, 2010 2:06 PM

I don’t think you phrased that question as elegantly as possible, Bruce.

It isn’t whether the government should license code from other people or not.

It’s what code the government should be licensing from other people.

Would you buy a lock for your door that also had an integrated door bell? And mail slot? Clock? Barometer?

When the government licenses Commercial Off The Shelf software (COTS) that is what it is licensing.

Focus on software that does the absolute minimum required to meet clearly defined requirements, and then check that code.

BF Skinner March 31, 2010 2:51 PM

@Kangaroo “No one ever argued that some magical process increases security with in-house development!”

To you. I’ve heard the argument repeatedly. Oddly, it is often made by programmers whose jobs are being offshored.

This doesn’t cover the risk, though, that foreign nation security services could infiltrate their own programmers (or suborn in place people) to plant back doors in widely used source code.

I’m not a believer in ‘if they’re foreign they can’t be trusted’. God knows why after how they were treated, but the Filipino people were loyal to the US over the Japanese in WWII, and the Montagnards stayed with the US over the VC. (And Clive, does England still maintain Gurkha brigades?)

Trust is an independent variable from nationality/employment.

bob March 31, 2010 3:02 PM

@John at March 31, 2010 9:16 AM
“In an engineering discipline, we can prior to the actual construction of the object determine to a high level of assurance that the object will actually serve its purpose and is practical.”

The only reason we can do that prior to construction is because the materials have been characterized. We have millennia of experience with many of them. If we had to start with new and different materials every time, we’d be wondering why building wasn’t an engineering discipline.

What are the materials used to build software? Abstract bits. Not even concrete bits, which would be electrical and electronic signals occurring in physical materials, and which clearly do involve engineering.

If the bits weren’t abstract, they wouldn’t be able to change form (e.g. electronic to magnetic and back) or be transferable intact to different media (high-density barcodes printed on paper). In short, software is information acted upon in a particular way using physical materials, but it’s still fundamentally information, not the physical materials it’s expressed in.

When you’re dealing in information, there is nothing like the safety margin there is in materials. Consider a mathematical proof with only one tiny flaw in the reasoning. That flaw could topple the entire proof. Most building materials simply don’t work that way. Or the flaw could be inconsequential. Same thing applies to the information in a DNA sequence: a change may be fatal or inconsequential to the organism, or even confer some direct or latent benefit. The only reason that works is because the process is massively parallel, and abject failures remove themselves by perishing.

Your AI example is a red herring. We don’t know enough about what constitutes human reasoning to be able to express it in any form whatsoever. In short, we don’t know how to make a brain, in any material or software. That’s at least as much the barrier to an engineered intelligence as anything else.

There is an aspect to software that involves engineering. If there weren’t, there wouldn’t be different computer languages. Structured programming, object-oriented programming, and lots of other things were essentially designed as “engineered materials” to be more reliable ways of expressing the desired result. They are the stuff which more reliable software is made on.

Nick P March 31, 2010 3:24 PM

@ Bruce

As I’ve been ranting on about increased assurance systems, it’s nice to see you mention it. I have to say there are a few counterpoints, though. For one, where your software is written does matter if you’re a government or corporation protecting high value assets. An American working for an American company isn’t inherently better, but they are much less likely to subvert or disclose technology to Chinese firms than, say, a Chinese company or employee. The tremendous amount of espionage and subversion in places like China and Russia refutes your unconditional argument. Unless we use a process like EAL7 (>EXPENSIVE<), we can’t really have our enemies build the software without worrying about what they did to it. So, we must consider location as a risk and deal with it like any other.

On Assurance

Now is the best time to get the assurance going. Several decades of research have shown exactly what it takes: unambiguous requirements written by domain experts; careful attention to risk throughout lifecycle; a specification that corresponds strongly to requirements; implementation that corresponds strongly to specs; much testing/modeling; independent, thorough review. All security-critical (EAL5+) and safety-critical (i.e. DO-178B) software is written like this. These rarely fail. The question is, “Can it be made cost-effective?”

There are some considerations here. For one, we have better tool and language support for this than ever. For instance, Praxis’s Correct by Construction methodology produces software with few enough defects that they warranty it. Cleanroom has been used for years by many companies and accomplishes the same thing without the developer ever even executing his/her code (!). Using some formal methods at the requirements/design stages has been shown to catch serious bugs early. Static analysis tools spot trouble spots. Languages/subsets designed for high integrity, such as SPARK Ada or OCaml, would eliminate large classes of errors overnight while being productive & fast. Modern theorem prover tricks, like Coq’s autogenerating OCaml & seL4’s HOL equivalent of C statements, help in the highest assurance cases with significant cost savings. Finally, all the tools supporting standards like DO-178B, like Perfect Developer & Esterel, make integrating assurance easier than ever.
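To give a rough, hedged flavor of what those languages and tools enforce, here is a small C sketch (the function and its limits are invented for illustration): explicit preconditions and a visibly bounded loop are the kinds of properties a SPARK-style contract checker or a static analyzer can verify mechanically.

    #include <assert.h>
    #include <stddef.h>

    /* Copy at most dst_len-1 bytes and always NUL-terminate.
     * The stated preconditions and the bounded loop are the sort of
     * properties high-integrity subsets and analyzers check for you. */
    size_t bounded_copy(char *dst, size_t dst_len,
                        const char *src, size_t src_len) {
        assert(dst != NULL && src != NULL);   /* precondition */
        assert(dst_len > 0);                  /* room for the terminator */

        size_t i = 0;
        while (i < src_len && i + 1 < dst_len) {  /* loop visibly bounded */
            dst[i] = src[i];
            i++;
        }
        dst[i] = '\0';
        return i;                             /* bytes copied, always < dst_len */
    }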

On Government

You’re absolutely right that the government must lead the way. They must demand assurance and then pay for it. If they don’t, companies won’t build it. Alas, I see problems here. The government demanded Orange Book A1, verified systems for critical assets, then went and bought the equivalents of SELinux and Windows en masse, just buying a few high-assurance systems to act as guards. Companies spent millions and then had nothing to show for it, such as Honeywell selling only 35,000 SCOMP systems. So, I think long-standing contractors will have serious reservations about taking such risks again. They know the government will cave in and go for what’s cheaper or has more functionality. There is some hope in the recent push for EAL6+ separation kernels: government mandated them, helped develop them, certified them, and spent lots of money buying them. If the government does more of this, we will see a flood of medium assurance products hit the market. Encouraging open-source, medium-to-high assurance would help as well.

jeez March 31, 2010 3:36 PM

@bob:

Yes, you’re onto something there. This is sort of what I was trying to get at with my “two million moving parts” hyperbole: When you make a building out of concrete and steel, we know a lot about the properties of concrete and steel and can predict with pretty good accuracy how they will behave. And then slap on a big safety margin, just to make sure.

But when we build software, we are constantly inventing new libraries, new modules, new components, and then building software products out of those. Even worse, software designs are not constrained by the “three dimensions” of physical reality, or by well-understood forces such as gravity etc. So software can get arbitrarily big and complex, and the main limitation on this complexity is how much humans can understand and juggle in their heads at one time. But we’ve learned how to break a software application into a whole bunch of pieces and let each programmer worry about only one piece at a time, with the result that the combined software product at the end can be vastly more complicated than any one programmer can understand. (I submit for your consideration: Microsoft Windows.) Anyone who has worked on multi-million-line codebases has eventually come to the realization that there’s a lot of stuff in their codebase that they will never have time to learn about in detail. Even if you are highly familiar with (or an expert in) some parts of the codebase, every so often you will stumble on something you’ve never seen before and go ‘WTF?’ For me this happens about once a week…

Anyway, I agree with the comments above about software currently being a “craft” rather than an engineering discipline, but I really don’t think this is going to change any time soon. People have been lamenting the same problems about software construction for over 50 years now, and it’s not much closer to being an engineering discipline than it was in the ’80s. The industry goes through major fads (“agile” being a recent one; before that I would say “OOP” was a big one) but it doesn’t really learn much from its failures.

Our ambitions of what we want software to do, and the hardware resources to do it with, continue to grow in leaps and bounds. We actually have gotten better at writing software in the past few decades, but not enough better to keep up with our ambitions.

Maybe if you work on safety-critical software (e.g. embedded systems that control the anti-lock brakes in cars) then you strive to keep your system as simple as possible and have as few features and responsibilities as possible. But the vast majority of software is not written with security or correctness as the overriding most important “feature”, but then we end up using it in hospitals and factories and traffic control systems anyway.

Sorry if I seemed overly pessimistic about this whole issue. But I really think things are going to get a lot worse before they ever start to get significantly better. Eventually we will have so much computing power available that computer-assisted techniques for designing software will be much more powerful (i.e. you tell your computer what you want the software to do, and it writes it for you). Compare it to modern chip design, for example. As long as we fallible humans are writing the software ourselves, its quality is going to range between “awful” and “okay, but not perfect”.

Nick P March 31, 2010 3:39 PM

@ Matt

“the app store model with solid TPM will deal a crippling blow to malware”

I don’t think so. You have to look at the attack vectors and mechanisms. For one, it just moves them to web service providers. I don’t know if you’ve noticed, but most web stacks are much less security-savvy than, say, a Linux or BSD OS. It will reduce the risk of local malware, but all vectors that hit servers, scripts in the documents, poor parsing, etc. will still work. Additionally, the biggest sources of malware today are social engineering over email and drive-by downloads. Since Chrome is still a complex browser and Linux kernel, it only reduces risk: there’s still plenty of ability for botnets to hit. TPM has the ability to negate this, IF users don’t override it thinking it’s buggy like most DRM. If users can’t override it and make dumb decisions, then TPM will have dealt a crippling blow to liberty. Capability-based microkernel OSes, hardware-assisted virtualization, TPM, signed apps, and at least medium-assurance security in core apps is the best combo that lets us reuse much legacy functionality. You can get this from defense contractors for big $$$, but we need an OEM preloading a cheap version of the same thing on all machines for real exposure.

@ John

Yes, it will take engineering. I don’t think seL4 is a good example here, though. Cleanroom and Correct by Construction are engineering approaches that are much more practical and easy to learn as far as “normal” code cutters are concerned. Low-level C, modelling said C in HOL, writing HOL proofs, verifying HOL proofs, etc.: I just don’t see most people doing that. Besides, although I praise L4.verified’s efforts, they haven’t released the proofs or the code for peer review. I’ve criticized Heiser and company for this, but all we have is their team’s word. Better examples in this area, due to field use and independent review, are Green Hills INTEGRITY-178B, Aesec’s GEMSOS platform, and Galois’s trusted block access controller for MLS wikis. They used formal methods, extensive testing, pentesting by NSA, and have actual products that have been running for a decade in high-risk environments (except Galois’s recent BAC). As soon as I get seL4’s proofs and validation, I’ll list them as best in class.

Craig March 31, 2010 3:40 PM

Trust and security assurance are the ultimate goals, although with a multitude of human motives you could say they are impossible to achieve, in-house or outsourced.

First Time March 31, 2010 3:47 PM

@Bob

I think you’re looking too small. Bits are to software as molecules are to building construction.

Beams, girders, etc. are to buildings what functions are to software.

If I might select an example of what I think John was getting at:

Input functions are well understood, and ways to make them secure are also well understood. But, we still continue to be plagued with buffer overruns, heap overflows, etc.

The difference between a good input routine and a bad one is in the “craft” of the coder.

When input routines have attained the level of confidence that, say, a wooden beam or a steel girder has, then we are approaching the point where software has become an engineering discipline.
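To make the input-routine example concrete, here is a minimal sketch of the classic bad-versus-good choice (the buffer size is arbitrary):

    #include <stdio.h>
    #include <string.h>

    int main(void) {
        char name[32];

        /* Bad: gets() has no idea how big 'name' is, so any input longer
         * than 31 characters overruns the buffer. (Left commented out.)
         * gets(name);
         */

        /* Better: fgets() is told the buffer size and stops there. */
        if (fgets(name, sizeof name, stdin) != NULL) {
            name[strcspn(name, "\n")] = '\0';   /* strip the trailing newline */
            printf("Hello, %s\n", name);
        }
        return 0;
    }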

I’m picking a narrow example for clarity, but I think the principle extends. As long as we continue to reinvent the wheel for every software project, and start everything from scratch, human error will continue to introduce bugs, and the number of bugs will vary as the skill of the coder varies.

We can apply “software engineering principles” and “best practices” to try to apply a scale factor to the “skill vs. bugs” equation, but that’s all we’re doing.

And, further to John’s point, we can imagine a building that we cannot build yet, but we can clearly visualize what technology we need in order to be able to build it.

However, it is possible to imagine a piece of software where we could not reliably predict the number of bugs, where they might be, or what their effect might be.

The hand-crafted nature of current coding practices is a limiting factor on the journey towards an engineering discipline.

I’m not sure what the solution is, but I know in my younger days when I was coding, I tried to re-use as much of my code as I could. This was partly from a desire not to do unnecessary work, but mostly because it had already been debugged, which was always harder than writing it in the first place.

Clive Robinson March 31, 2010 4:01 PM

@ BF Skinner,

“And Clive does England still maintain Ghurka brigades?”

Yes, the “United Kingdom” does indeed maintain Gurkha Regiments.

And they are very, very loyal, even though our Ministers of Defence treat them very, very shabbily, to the point of making it a national disgrace.

The Ministers also look for any and every reason to criticise those who stand up for Gurkha soldiers’ rights, as was seen in the disgraceful comments of a junior minister just the other day, which shows how morally bankrupt most of them are.

This behaviour shames me and many others who are citizens of the UK, especially those who have served their time in the Armed Forces and have had contact with the Gurkhas.

David March 31, 2010 4:27 PM

There’s another difference between designing a bridge fit for use and secure software: the nature of the threats.

Bridges are designed to withstand certain known and projected stresses with a certain margin of safety. This isn’t always done correctly (I live within a couple miles of the 35W bridge), but it’s understood. I don’t know of any bridge specs that specified how much explosive it had to withstand.

Security software has to withstand arbitrary attacks, including attacks nobody had thought of when it was designed. That is a qualitatively different problem, and much, much harder.

Brian March 31, 2010 4:49 PM

Sadly, this is something I battle with when coding. We are building an application that deals with big dollar figures for large entities. My thought is that it should be as secure as I can make it. It started with security built in, and well enforced. As things went on, we had users give feedback. The only thing they didn’t want was security. They didn’t even like having to use a password. All security controls have been removed. It’s infuriating to me, since I know how vulnerable this software is because of it. And I hate the argument of “I was just doing what the owner wanted”. Time to find some place that actually respects even base levels of security…

Rob Lewis March 31, 2010 4:51 PM

@Nick P,

Hate to harp on a fellow IA ranter, especially since you make such a good case.

If you recall our conversation in December, you said there would be some value in an “injectable” technology that raised assurance levels of existing COTS operating systems. In January we received our DIACAP scorecard of MAC I, Classified, and an IA rating of medium-high assurance. Once they understand the technology better they may lean towards high. The Red Team are probably still scratching their heads and wondering why they could not breach standard COTS systems with identified vulnerabilities.

So while all of the assurance tools you describe remain necessary and should be used, the economic climate of the day will remain a huge barrier to providing the incentives needed to drive these correct actions, because conditions are worse now than when governments cheaped out before.

I propose that what we bring to the table IS economically viable: instead of applying your sound principles and EAL standards to millions of lines of code from the OS up the entire application stack, apply them to the 10,000 manually verifiable lines of code that we provide in our sub-system. Don’t you think that sounds much more affordable and expedient?

Clive Robinson March 31, 2010 4:56 PM

As a first step on the process people need to understand a couple of things,

1, First of all, a closed system such as a single-CPU machine cannot defend itself from malware.

2, Malware comes in two basic flavours,

2A, Code that is inserted into a running system without the permission of the system owner.

2B, Code that uses defects that are already in the system via the actions of the system owner.

3, In a common memory model CPU it is not possible to stop code being put into the system either as 2A or 2B.

Even if you do the formal proofs, they are only good for what you can think of. Which means there is always the possibility for an attacker to do an “end run” around the proofs by doing something that has not been thought of by the system designers.

The potential for this in a “common memory” model such as the von Neumann architecture is considerably greater than in separate-memory models such as the Harvard architecture.

With a strict Harvard architecture “data is data” and “code is code”.

Data can affect the way the code functions, but it cannot add code. So attacks 2A and 2B are difficult, and only possible if there is a way to get at code instructions via the data modifying the program counter (google [“Harvard architecture” gadgets]).

The problem with the strict Harvard architecture is that “on its own” it cannot have a traditional OS, due to the issues of “loading code”. With a little thought (hint: second CPU) you can see how easily this can be resolved.

Another argument that is put forward against the strict Harvard architecture is “it does not support C”.

To which I say “so what”: C is not the only programming language out there. It might be relatively simple and in many respects “powerful”, but it is also extremely unsafe.

C is a language of the von Neumann architecture. Trying to make it run on any other architecture is like trying to run your diesel car on petrol. You can sort of make it work, but why go to all the bother?

Code is a subset of data which is a subset of information. As has recently been shown data of almost any form can have code hidden within it.

It can be shown (Turing’s halting problem) that a CPU cannot tell in advance if it has “valid code” (that halts) or “invalid code” (that does not halt). From this you can show that a CPU cannot reliably detect or deal with malware. This is true of all single CPU machines.

However, now consider the case of a general-purpose (Turing-complete) computing engine under the control of a second engine (that is not Turing-complete), such as a minimal state machine. It can look at the I/O behaviour of the first to determine whether what it is doing is “within bounds” or not.

Because this second engine is not influenced by the data the first engine is processing, an attacker cannot attack it from the first CPU (other than as a DoS attack).

Whilst this does not preclude certain types of attack (covert side channels) these can be dealt with in other ways.

This sort of design allows not just separation of data and code; it also makes formal methods considerably easier.
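As a toy software analogue of that idea (all operation names and limits here are invented), the monitor is nothing more than a fixed table: it never interprets the data the compute engine is processing, it only classifies each I/O request as within bounds or not:

    #include <stdbool.h>
    #include <stdio.h>

    /* The checker is deliberately not general-purpose: a fixed whitelist
     * that the main engine's data can never reprogram. */
    enum op { OP_READ_SENSOR, OP_WRITE_LOG, OP_WRITE_FLASH, OP_OPEN_NET };

    struct request { enum op op; unsigned length; };

    static bool monitor_allows(const struct request *r) {
        switch (r->op) {
        case OP_READ_SENSOR: return r->length <= 64;   /* within bounds */
        case OP_WRITE_LOG:   return r->length <= 512;
        default:             return false;             /* everything else refused */
        }
    }

    int main(void) {
        struct request bad = { OP_OPEN_NET, 0 };
        printf("request %s\n", monitor_allows(&bad) ? "allowed" : "refused");
        return 0;
    }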

Personally I think it is time we investigated this sort of “secure by design” hardware.

Simply for no other reason than “code cutter time is expensive” and “code cutters have better things to be doing than wasting time worrying about secure coding”.

John March 31, 2010 5:02 PM

@Bob: “Your AI example is a red herring. We don’t know enough about what constitutes human reasoning to be able to express it in any form whatsoever.”

And your point is….?
What areas of research are required to understand human reasoning?
By how much does our understanding of human reasoning need to improve?

Yes, programming deals with the manipulation of information. The various programming languages out there are attempts to increase the reliability and size of a project that a person can understand and manage. But they don’t fundamentally change how programming is done. Actually, let me correct myself. Programming isn’t the manipulation of information. Programming is the manipulation of entities that in turn manipulate information. Programmers don’t solve problems. They solve meta-problems.

Let me describe what I would consider to be the ultimate “last program ever written”.

What I’m envisioning is a system where you walk up to it and it asks you “What would you like to do?” And you describe to it the problem you want solved. The system takes your description and breaks it down into sub problems. If it encounters a sub problem it doesn’t understand, it asks you about that sub problem and has you describe how to solve it. If you’ve given it an ambiguous description, it asks you questions until the ambiguity is resolved. This process repeats until eventually your original problem is solved. And of course, this system would remember the solutions to all the other problems that it was involved in solving so that those previous problems could be used as part of the solution to new problems. In many ways such a system would eliminate programming as an occupation. Reinventing the wheel would be a thing of the past since the system would reuse any wheels it was involved in creating.

Programmers would be a thing of the past. Or would they? Seems to me that programmers would instead be replaced by people who are very good at understanding and explaining solutions to problems. And that to me sounds like a programmer. A programmer working at a much higher level of abstraction than they currently do. But a programmer nonetheless.

Pat Cahalan March 31, 2010 8:00 PM

I’m a little disappointed in Marcus on this one, I was expecting a much more thorough and depressing writeup. Not that this is bad, just not his best work 🙂

I think one of the problems we have is that people gloss over the assumptions of the languages they use, particularly when it comes to security.

I’m going to borrow a chunk from Erickson’s “Hacking” ->

“C is a high-level programming language, but it assumes that the programmer is responsible for data integrity. If this responsibility were shifted over to the compiler, the resulting binaries would be significantly slower, due to integrity checks on every variable. Also, this would remove a significant level of control from the programmer and significantly* complicate the language.”

(* “significantly” added by me for emphasis)

When you’re talking about security at the code level, you’re talking about tradeoffs that are implicit (in many cases) in the actual construction of the language. If the coder doesn’t know or care about those tradeoffs, you get really insecure software.

Of course, you can engineer the language to be more secure, thus reducing the burden on the programmer, but you’re going to wind up with more bloated software that runs like crap. Or you can engineer the language to be more flexible, but then you need better and better programmers to prevent the basic security flaws that come when people don’t understand the code they’re actually writing. You also need to control how the code is used… someone can write a chunk of code that doesn’t require input validation if they’re also writing the chunk of code that creates the input and they know that it won’t exceed the parameters, but if someone else rips out the chunk without validation and uses it somewhere else, you can get screwed 🙂
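A hedged sketch of that last point (all names here are invented): a routine that is safe only under an invariant its original caller happens to guarantee, and that becomes a hole the moment someone reuses it elsewhere.

    #include <string.h>

    #define FIELD_LEN 16

    /* Safe ONLY if the caller guarantees strlen(src) < FIELD_LEN.
     * There is no validation here; the check lives in the caller's head. */
    static void copy_field(char *dst, const char *src) {
        strcpy(dst, src);
    }

    void original_caller(void) {
        char field[FIELD_LEN];
        copy_field(field, "known-good");     /* invariant holds here */
    }

    void careless_reuse(const char *user_input) {
        char field[FIELD_LEN];
        copy_field(field, user_input);       /* invariant unchecked: overflow risk */
    }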

@ John

That sort of decision tree-based learning structure runs up against 2^n ugliness really fast. You’re going to exceed computational feasibility faster than you’ll get the system to grok even a basic problem.

Jay March 31, 2010 8:22 PM

@Clive

So you’ve doubled the components (complexity) and done away with all our present-day programming languages (barring maybe asm or BASIC) in the pursuit of the Harvard architecture… but the Harvard architecture is only marginally more secure.

  • Harvard architectures still can be attacked with return-oriented-programming… so smashing the stack would still be fun and profitable.
  • High-level flaws in software cannot be defeated by low-level design. Financial data transmitted in plain text would still be transmitted in plain text. Timing side-channels in encryption libraries wouldn’t go away. Bad RNGs (a la debian’s openssl) would still expose your keys. Heck, even path traversal would still be with us…

Lawrence D’Oliveiro March 31, 2010 9:11 PM

“Many Eyes Make All Bugs Shallow.”

Thus it is often said in the Open-Source world. And let me mention some pieces of well-tested, reliable Open-Source security software that are crucial to the underpinnings of large parts of the Internet you’re using right now: OpenSSL, OpenSSH, Netfilter/IPTables, GPG …

Nick P March 31, 2010 11:53 PM

@ Lawrence

Dude, get real. Open source can aid security, but doesn’t imply it. The apps and tools you listed have had plenty of bugs, many devastating in nature. The pros may have exploited them for a while before the run-of-the-mill hackers got in on the act. You need more than open source to have security. If anyone needs proof, then just glance at the code quality of a large random sample of SourceForge projects that aren’t popular. 😉

@ Rob Lewis

Welcome back, Rob! I’ll start by saying that you’re one of the few marketing guys I don’t mind too much around here. You push your products, sure, but you provide evaluatable information and occasionally add to discussions. I commend you there. 😉

As for your product, I looked into it. From what I found, my original concerns were valid. With Trustifier, one must trust these to ensure security: hardware/firmware; DMA access unless IOMMU; trustifier itself; OS (kernel mode exploit still kernel mode exploit); trustifier & OS integration scheme; security policies of local system; architecture, operation & policies of distributed activities. Trustifier’s assurance activities only help on a few points: trustifier; trustifier & OS integration; OS security; policy enforcement. The huge kernel space attack vector is still there & the policies are where plenty of security fails. If the policies suck or people work around them to get work done, then the system assurance goes down. Then, there’s the network and application level policies. I still see a ton of risks for that. Correct me if I’m wrong, but the offering of the Ryu web solution implies that Trustifier alone isn’t enough to secure the network/apps.

I did look into the DOD activities and corroborated that the product was evaluated. According to the documents, it wasn’t Trustifier in general but a specific application customized to run on Trustifier that used the military’s classification scheme, which is very easy to write policies for. So, here’s what we had:

  1. Trustifier and regular OS.
  2. Cross-domain solution specifically designed to use Trustifier’s security properties.
  3. Security policy for that solution that was both easy to do and fixed.
  4. Probably a fully up-to-date, patched Linux platform.
  5. And Red Team was given administrative, root access.
  6. Result: They failed.

So, what to take from this? Well, for starters, a system with security flaws in it might have fared differently. If they had a flaw in the kernel, common with Linux, then they might have circumvented Trustifier. Additionally, the scenario was almost ideal: good security kernel + OS without known exploits + conceptually simple app designed for the kernel + conceptually simple (read: not like commercial policies) policy = they failed. I speculate that if any of these were different, the results might have been as well. The other issues are source code, time and talent. An EAL6+ evaluation requires all TCB source code to be given to NSA hackers for as long as they like. I think Green Hills’ product was pentested for >8 months< before the NSA okayed it, and it was very simple compared to your platform. To come close, Trustifier-based platforms must: come out unscathed in the wake of application-level, 0-day user-mode, or 0-day kernel-mode OS attacks; survive attacks by well-funded pros targeting Trustifier specifically; and pass long, open-source pentesting. The platforms & methodologies I previously mentioned achieved this. They didn’t just say they could: they did it already & continue to build on it.

However, I do appreciate the update on the new info. If you would post links from non-company sources, then I’d be glad to update myself on it. I’m also awaiting the release of the collaborative solution that was pentested. The reason I can’t go with Trustifier for general stuff is that it’s MUCH harder. Web servers and web browsers, for example, don’t lend themselves to security kernel techniques without lots of porting effort. You espouse the wonders of Trustifier, but my counterclaim has always been that Trustifier isn’t the root of trust. The Trustifier platform consists of many components at many levels, quite a few being complex & hard to get right in the commercial world. The Ryu solution your company offers tends to support this: it’s a ton of functionality & behavioral policies geared towards ensuring security, with Trustifier acting in a support role. Much of the security functionality obviously depends on the OS or app-level security rather than Trustifier’s mechanisms. A good security kernel doth not a secure web server make. That’s why we need the fine-grained OSes, good architecture, and the SDL-style development methodologies.

As usual with you, I end the very critical analysis with a bright outlook. I told you before I like the product & I hope to see more evaluation results, especially those new links I asked for. I’ve mentioned it in a few posts as a competitor to SELinux, so you haven’t wasted your time here. I seem harsh, but I’m quite fair. 😉 I just don’t think the platform as a whole deserves even a medium assurance label until much work has been done, especially on assurance effects of policies, kernel-level flaws and OS integration. Of course, I’d say the same about competitors like Solaris 10 Trusted Extensions/Containers, BSD Jails, and SELinux.

Finally, one doesn’t have to rewrite the whole stack to achieve the assurance I’ve talked about. One could do a lot with legacy apps just by using a secure kernel, a few trustworthy support services, a trustworthy virtualization layer, and slight reengineering of apps on legacy OSes. It’s not a lot of work & most is already done, which is why I espouse these solutions. If Trustifier completes even an EAL5 evaluation or gets Type 1 certified for something, I’ll give it equal consideration in all posts on the subject. That’s a challenge to you guys. In the meantime, more links please. 😉

Nick P April 1, 2010 12:09 AM

@ John [on auto-programming]

Automatic programming has been a dream of mine for some time. I spent a year or so heavily wrapped up in AI just to pull it off. I learned about compilers, natural language systems, problem-solving engines, etc. I kept pushing the idea as hard as I could to figure out why we haven’t been able to do it. Here’s my hypothesis for your consideration: it’s not the problem-solving or programming that would be hard, but requirements understanding.

Have you looked into what it takes to understand and solve a simple, common and random problem? There’s a surprising amount of background knowledge, logical/physical/linguistic rules, and domain knowledge to consider. Additionally, a machine that could learn in spite of massive uncertainty and produce rational results is non-trivial. Projects like Novamente have brought it forward a long way, but machines that have “common sense” or even a 3 year old’s intuitive thinking capability still don’t exist. The reason is that there is too much information to program. A general-purpose software generator would require something like a high school student’s understanding of human subjects, plus plenty of domain knowledge in common collegiate & software-related fields.

Even though our brains are the best learning & problem-solving tools known to man, we still take almost two decades to reach the requirements I specified. Even if we create an architecture for an AI like you mention, we will have to train it. Even if we only have to train one, we must get the architecture, knowledge, learning experiences and inference rules right the first time. Each new attempt is a tremendous investment to produce the common sense & background knowledge in the new form. The Common Mind and Cyc systems are supporting evidence for my point: they illustrate this problem exists because some of the best AI researchers are pouring that much effort into it. Just think: hardware; software; architecture/design; training sets; ontologies; tying ambiguous natural language to all of that efficiently; process for concepts to requirements to effective design to efficient code. The last part will be the easiest, as we already have a lot of that. I think automatic programming is a good long-term goal, but I would say producing such an agent will probably require as much effort as the Human Genome Project… for the first good try.

Winter April 1, 2010 4:24 AM

This is more or less the subject of Andy Updegrove’s eNovel: The Alexandria Project.
(http://www.consortiuminfo.org/standardsblog/article.php?story=20100117193642603)

Now, Andy wants to make this a factually correct thriller about computer security (and security theater). But he is no security expert. He is a lawyer who has done a lot for Open Standards and Free Software. So he knows the people very well, but the technicalities much less so.

I like the novel very much and would like it to continue in the best technical sense. So I try to help out where I can. Which is VERY little indeed.

But if any of you would like to tell the world how a REALLY secure computer system would have to be set up, you could advise Andy on the technical aspects. If you can convince him, you might see REAL security explained to the unwashed readers of Dan Brown and Robert Ludlum.

You can start right away. In the last chapter, a funny way is introduced to harvest the eyeballs (really) that VC investors are so keen on.

http://www.consortiuminfo.org/standardsblog/article.php?story=20100328130913301

However, Andy is not sure about the technicalities. I tried to come up with a scheme that was hare-brained enough to do justice to VCs, plausible from a human perspective, and technically correct. If anyone would like to correct my mistakes and come up with a better story, please:

http://www.consortiuminfo.org/standardsblog/comment.php?mode=display&format=threaded&order=ASC&pid=23291

(down at the bottom of the page, comment posted Thursday, April 01 2010 @ 01:35 AM PDT )

Winter

Clive Robinson April 1, 2010 4:39 AM

@ Jay,

We have had part of this conversation before, but I’ll go through your points.

“So you’ve doubled the components (complexity)”

Err, probably not (certainly not in my hardware prototype anyway). Have a look at the difference between CISC and RISC architectures.

CISC was based on the idea that by making instructions “do more” you’d save memory, which was very, very expensive at the time (upwards of US$1000 per 64K). This is no longer true, and the problems have moved to memory I/O bottlenecks.

Thus most code spends more time being shifted around in memory than it ever does being executed, and the CPU blocks on memory…

A consequence of CISC is that you have so many instructions, and so much redundancy in the instruction set, that malware attacks become significantly easier (try making your own shellcode in ASCII if you want to see why).
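
To make that concrete, here’s a small sketch of my own (illustrative only): the same bytes, read from a different offset, decode as a completely different instruction stream, which is exactly the raw material gadget-based malware feeds on.

```c
/* Illustrative only: the same bytes that encode one legitimate x86-64
 * instruction can be re-read from a different offset as a completely
 * different, unintended instruction stream.  Dense, redundant encodings
 * are what give attackers so much raw material (gadgets, ASCII-safe
 * shellcode) to work with. */
#include <stdio.h>

int main(void) {
    /* Decoded from offset 0 (the programmer's intent):
     *   48 b8 0f 05 c3 90 90 90 90 90   movabs rax, 0x9090909090c3050f
     * Decoded from offset 2 (inside the immediate):
     *   0f 05                            syscall
     *   c3                               ret
     * Nobody wrote a syscall; it was hiding inside a constant. */
    unsigned char code[] = {0x48, 0xb8, 0x0f, 0x05, 0xc3,
                            0x90, 0x90, 0x90, 0x90, 0x90};

    for (size_t i = 0; i < sizeof code; i++)
        printf("%02x ", code[i]);
    printf("\n");
    return 0;
}
```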

I would argue that dropping CISC in favour of RISC would gain significant advantages in terms of silicon real estate.

Thus whilst I have conceptually “doubled”, in fact I’ve thrown out most of the useless and unneeded “complexity”.

So much so that you could put many general-purpose compute engines under one restricted-function engine. Which is actually advantageous.

“and done away with all our present-day programming languages (barring maybe asm or BASIC)”

Complete twaddle, and you should know that.

If you are actually arguing that the majority of higher-level programming language compilers and language tools are written in C or use the C library interface, fine. But there is no reason for them to be, and your argument boils down to “C is the translation code of choice”. Unfortunately, as most code cutters don’t know how to behave safely, let alone securely, C takes its baggage with it wherever it goes. Look at it this way: it’s like allowing a bunch of 5 year olds unrestricted access to a “tool shop without safety guards”; you know it’s going to end in a world of hurt for everybody involved.

With regards,

“… Harvard architecture is only marginally more secure.”

That depends on how you use it. Because of C, most Harvard architectures have been weakened, and it is this weakening that has made the gadget attacks possible.

As I said earlier we have had this conversation before. When you say,

” Harvard architectures still can be attacked with return-oriented-programming… so smashing the stack would still be fun and profitable.”

I pointed out that it was the “extras” added to the Harvard architecture that made this possible, and I posted a link to a paper that makes the same claim as you but clearly shows that it’s the “extras” that are responsible. That is why I used the term “strict Harvard architecture” to differentiate.
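
To show what I mean, here is a deliberately unsafe sketch (my own illustration, not taken from that paper) of the overflow that ROP builds on, and why a strict Harvard machine with a protected return stack takes that lever away:

```c
/* Deliberately unsafe, illustrative only: the classic stack smash that
 * return-oriented programming builds on.  On a von Neumann machine the
 * saved return address shares writable stack memory with the data buffer,
 * so an overrun lets attacker-supplied bytes steer control flow.  On a
 * strict Harvard design with an unwritable code space and a separate,
 * protected return stack, the same overrun corrupts data but cannot
 * redirect execution. */
#include <stdio.h>
#include <string.h>

static void vulnerable(const char *input, size_t len) {
    char buf[16];
    /* BUG (intentional): no bounds check, so len > 16 writes past buf and
     * over the saved return address that lives in the same stack frame. */
    memcpy(buf, input, len);
    printf("copied %zu bytes\n", len);
}

int main(void) {
    char attacker_data[64];
    /* In a real exploit these bytes would be carefully chosen addresses of
     * existing code ("gadgets"); filler is enough to show the mechanism. */
    memset(attacker_data, 'A', sizeof attacker_data);
    vulnerable(attacker_data, sizeof attacker_data);  /* expect a crash */
    return 0;
}
```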

” – High-level flaws in software cannot be defeated by low-level design.”

This is what many in the US call a “strawman argument”. I can make the safest car engine in the world, but it won’t stop you putting it in a dangerous car, and it won’t stop a drunk driver using it to smash their way home, leaving piles of mechanical and human wreckage in their wake.

A simple way to make most code more secure would be to properly deal with “exceptions” in all their various forms. However, this needs a fundamental change in a programmer’s mindset, from a gung-ho “charge the cannons down” attitude to a more stateful way of thinking.
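
As a small illustration of that stateful mindset (the file name and function below are made up), compare checking every failure explicitly with the gung-ho style that assumes every call succeeds:

```c
/* Illustrative only (file name and function are made up): the "stateful"
 * style checks every operation that can fail and handles the failure
 * explicitly, instead of assuming success and charging on. */
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static int load_config(const char *path, char *out, size_t outsz) {
    FILE *f = fopen(path, "r");
    if (f == NULL) {                        /* don't assume the file exists */
        fprintf(stderr, "open %s: %s\n", path, strerror(errno));
        return -1;
    }
    size_t n = fread(out, 1, outsz - 1, f);
    if (ferror(f)) {                        /* a short read is not success */
        fclose(f);
        return -1;
    }
    out[n] = '\0';
    if (fclose(f) != 0)                     /* even close can fail */
        return -1;
    return 0;
}

int main(void) {
    char buf[256];
    if (load_config("/etc/example.conf", buf, sizeof buf) != 0)
        return EXIT_FAILURE;                /* fail loudly, in a known state */
    printf("config loaded\n");
    return EXIT_SUCCESS;
}
```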

“Financial data transmitted in plain text would still be transmitted in plain text.”

That is a fault not of the system or the programmers, but of those at the top. And incidentally, in and of itself it is not actually bad. You have to have it in “plain text” at some point to allow it to be processed. It is a question of where you set your boundaries and how you implement them.

“Timing side-channels in encryption libraries wouldn’t go away.”

No, but again this is not architecture related, and it has some inherent problems that you appear unaware of (the problem came about from trying to solve another problem, and in all likelihood the solution will open up another attack for other reasons). The issue is doing crypto in software on an unknown platform.

“Bad RNGs (a la debian’s openssl) would still expose your keys.”

RNGs are (as you might know if you are a long-term reader of Bruce’s blog) a subject close to my heart. If you are referring to the problem I think you are, it was actually a deliberate choice by a programmer to make a change that made it insecure… And yes, the last time I looked there were still people out there using weak PK certs based on it…
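
For anyone who missed that episode, here is a rough analogue in C (not the actual OpenSSL code) of why mixing nothing but the process ID into the pool is fatal; the whole keyspace collapses to a few tens of thousands of possibilities:

```c
/* A rough analogue, not the actual OpenSSL code: if the only entropy that
 * ever reaches the generator is the process ID, there are only ~32768
 * possible seeds, so every "random" key can be regenerated by brute force. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void) {
    srand((unsigned)getpid());              /* the entire secret state: one PID */

    unsigned char key[16];
    for (size_t i = 0; i < sizeof key; i++)
        key[i] = (unsigned char)(rand() & 0xff);

    for (size_t i = 0; i < sizeof key; i++)
        printf("%02x", key[i]);
    printf("\n");

    /* An attacker simply loops over pid = 1..32768, re-seeds, and regenerates
     * every possible key -- which is essentially how the weak Debian keys
     * were enumerated and blacklisted. */
    return 0;
}
```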

“Heck, even path traversal would still be with us…”

Again, an issue that is not really anything to do with the CPU architecture.

And thus I can only conclude you don’t actually understand the issue.

There are all sorts of “assurance” issues at every layer in the stack, from the clueless/malicious (ab)user down to the wires and components that leak data via EM and audio radiation.

For a secure system you need to resolve all of the issues. This can be by fixing them or mitigating them; it is a design choice at that level in the stack.

However, fixing a problem at a lower level will not stop poor choices further up the stack leaking information at a high level.

Importantly, fixing high-level problems will not stop poor choices further down the stack leaking information at a low level.

Worse, any fixes at a high level can always be side-stepped by “bubbling up” from a flaw at a lower level (when the flaw is too small to be visible, this is also known as the “Champagne bubble effect”; that is, the effect of the flaw only becomes visible considerably higher up).

Most malware gets control by one of two routes: through the user, or by “bubbling up” from a lower level.

No technical solution (other than maybe the bullet) can solve the user issue. But there are partial technical solutions to the “bubbling up” exploitation of a fault or flaw.

The question is where and how do you expend resources to resolve the issues at lower levels.

One way is to get rid of a very badly flawed architecture that positively encourages “bubbling up” by its very inherent design.

The question then becomes: will the market allow it?

Sadly, at the moment we are stuck with the x86 architecture; even Intel admitted defeat and binned its IA-64 architecture in favour of the AMD64 solution. The question is, will the “business environment” allow “natural selection” to rid us of this “sabre-toothed tiger” evolutionary dead end?

Even its designers at Intel know it’s doomed, but they appear locked in a “danse macabre” with, amongst others, AMD, driven on by the maddened cajoling of the carousing consumer market. Which leaves the question: what happens when the music stops?

John April 1, 2010 9:46 AM

@Pat That sort of decision tree-based learning structure runs up against 2^n ugliness really fast.

I have absolutely no doubt whatsoever about that with our current level of understanding and technology. What I described would most certainly pass the Turing test with ease.

@nick p
What I described would absolutely require the system to be intelligent by all definitions of the word. And as I’ve mentioned earlier, we are still groping in the dark. We’re not entirely certain what areas of study are required. And we still don’t know how much we need to improve our capabilities.

Such a system as I described would for all intents and purposes be a fully sentient being.

Nick P April 1, 2010 2:33 PM

@ John

Thanks for your clarifications. Although I don’t think it would have to be a “sentient” being, I agree it would have to be as smart as one. Definitely a long, long, LONG term goal. Maybe it will happen if the government funds more brilliant researchers like the Minskys, Hillises, and Conways of old.

@ Clive on Intel’s processor

This reminds me: POWER7 came out recently. That’s 6-8 cores, 3MB of L3 cache per core, a 3.5GHz clock speed, a RISC instruction set, virtualization support, and compatibility with both servers and mainframes. :0 It also caught my attention because most separation kernel vendors target the POWER architecture. I will be looking into getting one of these things in a laptop (read: portable desktop tethered to an A/C outlet).

I’ll tell you what I want, though, Clive. Just a POWER, MIPS, ARM or x86 chip built with high assurance techniques. It needs to be energy efficient, small, and have OK performance. I’m tired of worrying about processor, firmware, MMU, VT and cache attacks. I need a decent chip done right, without significant flaws. Then we can build good stuff on top while retaining most legacy compatibility. I found out that Rockwell Collins’ offering in this area, the AAMP7G, is made for “deeply embedded” use. That explains its 100MHz clock rate, but I need something a bit faster. Could the surface area, power consumption, or something else about that chip be increased to do the job? Think there could be a quick fix for my chip problem there?

David April 1, 2010 5:00 PM

@Clive: You missed two varieties of malware.

One is software inserted into the system with the permission of the owner, through misrepresentation or social engineering. This is probably the most prevalent form of malware by far.

Another is taking advantage of capabilities of the system that were designed in. An example of this would appear to be the PDF hack, which seems to rely on the fact that PDF as designed is a fairly capable programming language specialized in several areas. Another example would probably be the recent pwn2own contest with the iPhone.

I find computer security to be a frighteningly complicated idea.

Clive Robinson April 1, 2010 5:44 PM

@ David,

“You missed two varieties of malware.”

Hmm,

I said,

2, Malware comes in two basic flavours,

2A, Code that is inserted into a running system without the permission of the system owner.

2B, Code that uses defects that are already in the system via the actions of the system owner.

Now I think we may be splitting hairs.

Your first “missing” is,

“One is software inserted into the system with the permission of the owner, through misrepresentation or social engineering. This is probably the most prevalent form of malware by far.”

You say “misrepresentation”, I say “without permission” (2A) and I would say they are actually the same thing.

I would argue it thus,

Suppose I sell you a “radio alarm clock” with a miniature radio mic and video camera hidden in it that you don’t know about. You put the unit next to your bed. To you, the unit appears to work fully, as you would expect a radio alarm clock to do.

However, I sit outside your house at night, record you and your significant other in private moments, and then sell this over the internet to whoever wishes to pay for it.

Yes, you gave permission for the radio alarm clock (your point), but no, you did not give permission for the “trojan horse” element that results in your “private moments” becoming all too public (my point). Yes, I misrepresented what I was selling you, but it was not an “innocent act”; it was a deliberately hidden attack, because I believed there was no way you would “give permission” to have your private moments made public.

So I think we are talking about the same thing just a different way.

As to your second “missing”,

“Another is taking advantage of capabilities of the system that were designed in. An example of this would appear to be the PDF hack, which seems to rely on the fact that PDF as designed is a fairly capable programming language specialized in several areas.”

I said in 2B,

“Code that uses defects that are already in the system via the actions of the system owner.”

I’m assuming that installing Adobe software was the action of the “system owner”, either directly or indirectly, or as something known to be pre-installed at purchase time.

The problem, I suspect, is with me trying to say things in a general way to cover many cases as briefly as possible; I get complaints if I’m too long-winded (ask Nick P 😉).

Clive Robinson April 1, 2010 7:17 PM

@ Nick P,

The POWER7… how things have changed over the years.

I played with various POWER chips back in the last century and kind of lost contact with them (other than when playing games on other people’s consoles 😉).

It looks like a bit of a beast, and like most high-performance CPUs it’s internally a modified Harvard architecture (sometimes called “Harvard cached”…). That is, it only goes von Neumann downstream of the cache (just to keep programmers and OS designers happy ;).

This modified architecture has always struck me as a real kludge: misses on the cache cause some problems, but oh boy, the extra hardware required to support “self-modifying code” is a real waste of silicon real estate.
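
To show what that extra hardware is papering over, here is a rough sketch (the bytes are x86-64, chosen purely for concreteness) of the dance self-modifying or JIT’d code has to do on a split-cache machine: write the instructions as data, then force the two caches to agree before jumping to them.

```c
/* A rough sketch of why "self-modifying code" support costs silicon on a
 * split-cache (modified Harvard) machine: instructions written as data must
 * be made visible to the instruction cache before they are executed.  The
 * bytes below are x86-64 ("mov eax, 42; ret") purely for concreteness; on
 * POWER or ARM the explicit synchronisation is mandatory, while on x86 the
 * hardware keeps the caches coherent for you -- at the cost of that extra
 * real estate. */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void) {
    unsigned char insns[] = {0xb8, 0x2a, 0x00, 0x00, 0x00, 0xc3}; /* mov eax,42; ret */

    unsigned char *page = mmap(NULL, 4096, PROT_READ | PROT_WRITE | PROT_EXEC,
                               MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (page == MAP_FAILED)
        return 1;

    memcpy(page, insns, sizeof insns);             /* code written as data */
    __builtin___clear_cache((char *)page, (char *)page + sizeof insns);

    int (*fn)(void) = (int (*)(void))page;         /* now run it as code */
    printf("%d\n", fn());                          /* prints 42 */

    munmap(page, 4096);
    return 0;
}
```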

Also, without care it can “stall” the CPU and throttle it right back to external memory speeds (oh look, a 3.5GHz CPU running like a 350MHz or much slower CPU; ye-ouch).

I am interested in what they have done with the “common cluster” memory. I will have to dig around a bit and see what they are up to; it sounds suspiciously like the “virtual window” method I have on my kludge. That is, the design I have gets around parallel-processing “messaging”, and apparently so does IBM’s…

Also, some POWER designs have a hardware profiler built in as standard which, with real-time access to a second or hypervisor CPU, would allow the “signature” to be checked.

Now if they would just dump the von Neumann kludge and use the saved silicon real estate to build a hypervisor, then you could have a very secure architecture quite easily 8)

By the way, the easiest way to get around the von Neumann problems is to have an MMU under the control of the hypervisor, not the CPU that uses it.

The chances are they may well be 90% of the way there with their cluster common memory.

So… It’s all doable 8)

Now how much do you think we’d have to pay to get the changes made 😉

Nick P April 2, 2010 12:47 AM

@ Clive

Perhaps I misphrased it, but my biggest question was whether you think the AAMP7G processor could be easily modified to, say, go faster & support more memory resources. Since I’m not a hardware guy, I’m wondering if a deeply embedded, slow, low-power processor can be supercharged simply by increasing its power, chip surface, etc. without major changes to the design. For your review, here’s a PDF covering the processor, its verification & Rockwell Collins’ development platform. It gives specifics about the chip.

http://www.csl.sri.com/users/shankar/VGC05/Hardin.pdf

Although your architecture enforces POLA at a fine-grained hardware level, we need some good stuff as a stepping stone. A high assurance, POWER-like processor is a nice start. I’m wondering if it would be easy to get this chip to do over 100MHz if I allow watts and parts cost to increase a bit, but without much more development costs.

John April 2, 2010 4:31 AM

@Clive The problem I suspect is with me trying to say things in a general way to cover many cases in as brief a way as possible, I get complaints if I’m too long winded (ask Nick P 😉

Good one Clive. Then I noticed the date.

David April 2, 2010 3:52 PM

@Clive: Are you calling Adobe software products defects? Not that I’m vigorously disagreeing or anything….

I’m making a distinction here between software that is itself buggy and software that’s vulnerable because it contains a programming language. From what I’ve heard, the latest PDF exploit will work beautifully with an implementation that adheres rigidly to the PDF spec, and is completely bug-free.

I think my disagreement is not so much in the denotation as the implications. Malware that is installed on the computer without permission suggests that the right thing to do is to prevent any software from being installed without permission, and existing bugs in products suggest that one possible thing to do would be to only get software that passes rigid security audits.

In reality, we have to deal with the far more difficult problem of filtering out malware from the good stuff, one problem being that malware usually flouts the RFC and fails to set the evil bit, and another being that it’s not in general possible to tell what a program is going to do.

Nick P April 2, 2010 10:48 PM

@ John

LOL. Sure is hard to be general and catch every corner case. It’s what I try to do, so feel free to be a bit wordy so long as the words aren’t wasteful and there is still only one guy on the blog writing an essay a post. (cough Clive cough)

@ David

Good points. It was actually FSMLabs’ argument against Green Hills’ overmarketing of their EAL6+ certification: if a necessary feature or security measure isn’t in the requirements/specs, then the rest of the lifecycle doesn’t even matter. Their example was resistance to flooding, a form of DoS attack. So, if the requirements specifically adopt a non-secure approach, then one of three things has happened: a tradeoff was made; the developer didn’t properly leverage platform security features; or the platform lacks functionality needed for secure operation (e.g. Win95’s access control). For PDF, it’s mostly a tradeoff, but we can easily see the others in it and in most vulnerable apps.

I don’t think we have to look for the mythical evil bit to stop much malware. The lack of POLA in modern operating systems is the cause of many problems. Lack of trusted boot, lack of trusted path, unnecessary complexity, and APIs without behavioral specs (or inconsistent with them) cause many more. These issues have all been fixed in some platforms, but the mainstream ones don’t have those fixes. I mean, why should a PDF viewer need administrative access (fixed now), unrestricted networking, unrestricted file system access (partially fixed), etc.? Or a web browser, mail server, etc.? They don’t. Capability-based security with controlled propagation is the best method right now, used in many secure OSes & frameworks. The old Biba integrity model can help a bit too: Microsoft used it in Vista/Seven to restrict the web browser’s & other apps’ default abilities. Now, if something wants admin access, the user must allow it in a “trusted” dialog. Is the dialog trustworthy? Probably not, but solutions exist for that too.
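
As a concrete, deliberately minimal sketch of that capability style (plain POSIX, nothing exotic, and the names are made up): hand the PDF renderer an already-open file descriptor instead of the ambient right to open anything it likes. The descriptor is the capability.

```c
/* A minimal sketch of capability-style confinement using nothing but POSIX:
 * instead of giving the renderer the ambient authority to open anything on
 * the filesystem, the parent opens exactly one document and hands over only
 * that file descriptor.  The descriptor is the capability. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Hypothetical renderer entry point: all it receives is one readable fd. */
static void render_pdf(int doc_fd) {
    char buf[4096];
    ssize_t n;
    while ((n = read(doc_fd, buf, sizeof buf)) > 0) {
        /* ... parse and draw; no open(), connect() or exec() required ... */
    }
}

int main(int argc, char **argv) {
    if (argc < 2) {
        fprintf(stderr, "usage: %s document.pdf\n", argv[0]);
        return 1;
    }

    int fd = open(argv[1], O_RDONLY);    /* authority is granted here, once */
    if (fd < 0) {
        perror("open");
        return 1;
    }

    /* A real design would also drop the renderer into a no-filesystem,
     * no-network sandbox (seccomp, pledge, a separation-kernel partition);
     * this sketch only shows "pass the capability, not the name". */
    render_pdf(fd);
    close(fd);
    return 0;
}
```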

So, the problem that causes malware is a lack of good security principles in OS design, incoherent/incomplete specifications, and poor implementation. I have expressed my doubts to Clive that security flaws due to tradeoffs, like your PDF scripting example, could be defeated with his design. That’s an example of subversion in a way these systems aren’t designed to detect. It’s good for extremely fine-grained POLA, IMO, but I don’t think it’s practical except in the highest-security cases right now. Removing the issues I mentioned would stop most of the problems, except for subversions like the PDF issue or social engineering.

Robert April 2, 2010 11:57 PM

Should the Government Stop Outsourcing Code Development?

This raises only one question in my mind:
Does the government have a choice?

I know that in the field of commercial hardware and software I certainly have no choice but to embrace outsourcing. IMHO the trend has shifted so far that there is no turning back. Today it is both high-quality and high-quantity engineers that come from Asia. Any talented young European/American should avoid the HW/SW engineering fields altogether.

So the answer to the original question is heck NO. The government needs to embrace the trend they started. Somehow they need to build in the checks that contain the obvious outsourcing risks, and they don’t have much time to perfect their oversight system.

Avoiding outsourcing is stupidity today, but within 10 years it will be regarded as unconscionably negligent conduct reserved strictly for “good old boy” defense contracts.

Rant-over…

Clive Robinson April 3, 2010 4:32 AM

@ Nick P,

I’m having a look at the AAMP7G paperwork; it is interesting in quite a few ways.

However, it’s a “double bank holiday and Easter break” in the UK, and I have a not-so-small destroyer of peace and calm to look after and entertain (whilst his mother goes off to do whatever it is she does).

So reading and contemplation are a little difficult currently (I now know why the “landed gentry” employed people to look after their offspring, and how the Victorian era came up with “Children should seldom be seen and never ever heard” 😉).

Nick P April 3, 2010 11:27 PM

@ Clive

No prob, man. It’s busy for me this Easter, too, in similar ways. I’ll look back here in a few days to see if you’ve posted an answer. And remember, it’s AAMP7 or AAMP7G. Because it’s at the bottom of the “most searched terms” list, one must spell it right to get good results.

Dave Williams April 17, 2010 4:53 AM

You’ve probably addressed this before, and my opinion on this is possibly simplistic, but I had forwarded your newsletter to a fellow admin with whom I have an ongoing debate about security. I appended the following comment. I apologize in advance for the tone – I was beating a dead horse, and the message was written for someone else.

[snip]

Application software, sure. But the entire United States – except for parts of NASA – runs on Microsoft Windows. The NSA runs Windows. The Army runs Windows. The FBI runs Windows. The entire bureaucracy runs Windows. Which means they’re all tied to a single security hole – Microsoft. Not just the usual viruses, Trojans, malware, etc., but Microsoft has 40,000-odd employees scattered around the world. Microsoft Vista is reported to have over 50 million lines of code, and not all of it was written by Microsoft employees, either – they outsource a lot of drivers and miscellaneous application development.

You really think every single line of that code has been vetted for security, exploits, or back doors? You don’t have to be obvious, you just have to leave a weak spot in an unlikely place.

Once you commit to an operating system – any system, not just Windows – you inherit the developer and distributor as part of your security zone. Yeah, at the corporate level you’re right not to worry about it… but if you’re the Department of Homeland Security, you should be concerned that Microsoft has some of its programmers overseas, and that even if they were loyal Microsoft employees, they and their families are subject to influence by your enemies. How much would al-Qaeda pay for a back door into the NSA? And, really, how much would it cost, if they were willing to be nasty about it?

The same thing, by the way, applies to Cisco, which provides the bulk of the heavy networking equipment worldwide. Way too much of the military and government assumes continuous internet access nowadays; from personal knowledge, I know the US Air Force would come to a near halt, since they’ve come close to their “paperless office” goals.

[/snip]
