Security as Interface Guarantees

This is a smart and interesting blog post:

I prefer to think of security as a class of interface guarantee. In particular, security guarantees are a kind of correctness guarantee. At every interface of every kind -- user interface, programming language syntax and semantics, in-process APIs, kernel APIs, RPC and network protocols, ceremonies -- explicit and implicit design guarantees (promises, contracts) are in place, and determine the degree of “security” (however defined) the system can possibly achieve.

Design guarantees might or might not actually hold in the implementation -- software tends to have bugs, after all. Callers and callees can sometimes (but not always) defend themselves against untrustworthy callees and callers (respectively) in various ways that depend on the circumstances and on the nature of caller and callee. In this sense an interface is an attack surface -- but properly constructed, it can also be a defense surface.

[...]

But also it’s an attempt to re-frame security engineering in a way that allows us to imagine more and better solutions to security problems. For example, when you frame your interface as an attack surface, you find yourself ever-so-slightly in a panic mode, and focus on how to make the surface as small as possible. Inevitably, this tends to lead to cat-and-mouseism and poor usability, seeming to reinforce the false dichotomy. If the panic is acute, it can even lead to nonsensical and undefendable interfaces, and a proliferation of false boundaries (as we saw with Windows UAC).

If instead we frame an interface as a defense surface, we are in a mindset that allows us to treat the interface as a shield: built for defense, testable, tested, covering the body; but also light-weight enough to carry and use effectively. It might seem like a semantic game; but in my experience, thinking of a boundary as a place to build a point of strength rather than thinking of it as something that must inevitably fall to attack leads to solutions that in fact withstand attack better while also functioning better for friendly callers.

I also liked the link at the end.

Posted on August 13, 2014 at 2:11 PM • 19 Comments

Comments

CatMat • August 13, 2014 4:26 PM

I like that. Modeling the security aspects of an interface as a defence surface tends to put one in the mindset of asking "Should I reject this input?" instead of "What could I do with this input?"
Only accepting inputs that match tested patterns is far more secure than rejecting known malpatterns.
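CatMat's allowlist point can be sketched in C. This is a hypothetical example (the function name and the 32-character limit are invented for illustration): accept only inputs matching a known-good pattern, and reject everything else by default.

```c
#include <ctype.h>
#include <stdbool.h>
#include <string.h>

/* Allowlist validation: accept only what we know is safe, reject the rest.
   Hypothetical example: a username of 1..32 characters drawn from
   letters, digits, and underscore. */
static bool username_is_valid(const char *s)
{
    size_t len = strlen(s);
    if (len == 0 || len > 32)
        return false;
    for (size_t i = 0; i < len; i++) {
        unsigned char c = (unsigned char)s[i];
        /* Anything outside the allowlist is rejected, including
           characters we never thought to blocklist. */
        if (!isalnum(c) && c != '_')
            return false;
    }
    return true;
}
```

The inverse approach, scanning for known-bad substrings, fails open: every attack pattern the author did not anticipate is silently accepted.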

And that end link is true gold. I wonder how hard it would be to write a self-propagating execution engine in emf?

Holmes Wilson • August 13, 2014 8:28 PM

I like it too.

Through it, the NSA scandal becomes about systemic integrity -- a broad failure of systems to work as they should -- rather than an abstract idea of privacy.

They're invading our sacred spaces, subverting products, undermining oversight, weakening other countries' democratic institutions (by spying on leaders or negotiators), and so on.

Jeroen • August 14, 2014 2:24 AM

A stimulating read!

At the conclusion of the article he states that the Security vs. Convenience dichotomy is false.

It is my belief that this is mostly true for secure (proper) software development. However, one can still argue that secure programming is more cumbersome than insecure programming. For example, I've met a LOT of developers who would rather let some sort of garbage collector handle all de-allocation than clean up after themselves.

In the grander scheme of the security business the dichotomy still holds as far as I am concerned, at least in a general interpretation of it. For example, more secure password policies ARE, most of the time, less convenient than less secure ones. If security were convenient, wouldn't we all be secure?

However as with many popular short quotes and sayings, nuance is lost.

Gerard van Vooren • August 14, 2014 4:05 AM

"Perhaps because ASLR was not (to my knowledge) clearly documented as a temporary cat-and-mouse game, engineers have come to rely on it as being the thing that makes the continued use of unsafe languages acceptable. Unsafe (and untyped) languages will always be guaranteed to be unsafe, and we should have used the time ASLR bought us to aggressively replace our software with equivalents implemented in safe languages. Instead, we linger in a zone of ambiguity, taking the (slight) performance hit of ASLR yet not effectively gaining much safety from it."

I agree with this statement. I think there are two questions here.

1. Which language should we use? I think this question ends up with "a bit of all", which leads to question 2.

2. If we end up with a mixture of languages, I still think we should have a uniform "interface" or API to let the languages communicate. Today that is still extern "C", but ... we need something better.
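For readers unfamiliar with it, the extern "C" boundary Gerard mentions looks roughly like this in practice. The header and function names here are invented for illustration: the linkage specification suppresses C++ name mangling so that any language's FFI can link against a flat C ABI.

```c
/* mathlib.h (hypothetical): today's de-facto cross-language interface,
   a flat list of C-ABI functions with no type safety beyond C's own. */
#ifndef MATHLIB_H
#define MATHLIB_H

#ifdef __cplusplus
extern "C" {   /* no name mangling: callable from C, C++, or any FFI */
#endif

int mathlib_add(int a, int b);

#ifdef __cplusplus
}
#endif

#endif /* MATHLIB_H */

/* A trivial implementation, just so the sketch is complete. */
int mathlib_add(int a, int b)
{
    return a + b;
}
```

Everything richer than C's type system (contracts, exceptions, ownership) is erased at this boundary, which is exactly the weakness the thread goes on to discuss.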

Clive Robinson • August 14, 2014 5:01 AM

Whilst I agree in general with the idea, it has its own perception gap that will cause problems.

Ask yourself where you should really do security or error checking: right up at the front of the program (left side), or where it actually matters in the program (right side)?

Logically you put it where it is actually required, which is way over to the right in the actual program logic, bound as tightly as possible to the part of the logic where it is needed. Thus the logic becomes self-protecting and as highly specific as required, which means you can use tight criteria and strong checking without fear or favour. That makes the priority "reject over accept", and it also makes the code very much less susceptible to changes in other parts of the code.
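Clive's "right side" checking might be sketched like this. The function name and the 1..60000 ms range are invented for illustration: the routine that actually uses the value enforces its own tight precondition at the point of use, rather than trusting a check done far away at the program's entry point.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical sketch of right-side checking: the criterion is local,
   specific, and "reject over accept". Code elsewhere can change freely
   without silently invalidating this check. */
static bool set_retry_delay_ms(uint32_t ms, uint32_t *out)
{
    /* This logic only makes sense for 1..60000 ms, so anything else
       is rejected here, where the value is actually consumed. */
    if (ms < 1 || ms > 60000)
        return false;
    *out = ms;
    return true;
}
```

Because the check is bound to the logic it protects, moving or reusing this function (the "code moves" problem described below) carries the check along with it.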

However, in practice this is not what happens: the mantras of code reuse, refactoring and coder efficiency say make it general and move it as far to the left as possible, so that exception-handling code is minimised or even eliminated. The problem with this is that it opens up lots of security holes; I could go through some of them, but the list is longer than my beard (even in near zero point print ;-)

But two issues crop up over and over again with left-side checking:

Firstly, even tiny changes affect many functions, and trying to check them all is a fool's errand at best.

Secondly, as technology improves, code moves. The classic example is the move of server-side code into web browsers: what shifted was the left-side code with most if not all of the error and security checking... oops. It did not take long for attackers to exploit that little faux pas. Whilst this now seems obvious, it actually still goes on as code gets moved from pilot project to initial production and through numerous upscaling steps thereafter.

Much time and effort is wasted with left-side error and security checking, and it really should be treated as very poor programming practice.

But there is an issue with right-side checking, which is what to do with failures that get caught deep inside the code: do you just drop them at that point, abort the program, or run them backwards to the left, such that error reporting and error correction can work in the way users, administrators and others would expect?

Obviously the last option is preferable where possible, and that is the problem: programmers will throw their hands in the air and say it's not possible "no way no how", which of course is a lie.

Doing it requires discipline and building it in from day zero: all interfaces need to be bidirectional, not just in control information but in data as well. This allows such problems as hardware errors and overloads to be dealt with efficiently in multi-tier and distributed systems, and it has been around for many years in one form or another, such as journaling file systems and high-availability database systems.

This however brings up an issue that few people are aware of or even comprehend: security also needs to be bi-directional. Otherwise a downstream attacker can use techniques to force false entries upstream that then get treated as security-checked and go downstream via a different path, one that is not open to an attacker under normal security checking.

The problem with doing all of this on the right side is that it means a lot more code has to be written, and programmers have to get their heads around proper security and error checking in both directions, none of which is going to make short-term management happy. However, forward-thinking management should at the least see it as an acceptable quality process, and further see the downstream savings in maintenance and upscaling costs.

Clive Robinson • August 14, 2014 6:21 AM

@ Gerard van Vooren,

With regard to languages, most are actually unsuitable, unless they can make and handle multiple-value returns from a subroutine in a standard and safe way. That has further implications, such as null values that cannot be treated as valid data, and what to do if a value is unexpected, missing or out of range.
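One common way to approximate a safe multiple-value return in C is to bundle the status and the result into a single struct, so the caller cannot take the value without also seeing whether it is valid. This is a hypothetical sketch (names invented), not something from the thread:

```c
#include <limits.h>
#include <stdbool.h>

/* Status and data travel together: there is no "null" long to misuse,
   and the ok flag must be consulted to know whether quotient is valid. */
struct div_result {
    bool ok;        /* control: did the operation succeed?           */
    long quotient;  /* data: only meaningful when ok is true         */
};

static struct div_result checked_div(long a, long b)
{
    struct div_result r = { false, 0 };
    /* Out-of-range inputs are reported, not guessed at:
       division by zero, and the LONG_MIN / -1 overflow case. */
    if (b == 0 || (a == LONG_MIN && b == -1))
        return r;
    r.ok = true;
    r.quotient = a / b;
    return r;
}
```

Languages with tuples or sum types make this pattern native; in C it is only a convention, which is part of Clive's point about most languages being unsuitable.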

Further, there is the issue of having comms/interfaces with both data and control channels that are both bi-directional; this has six-way synchronisation issues that need to be correctly addressed. Most current OSs do not do this, or do it badly, so that needs to be addressed as well.

Thus you would tend to start looking at, at a minimum, RTOS systems with the appropriate languages (ranging up to highly parallel systems from the hardware up).

Judging by what happens with threads currently, many programmers are nowhere near ready to make this sort of transition, which in and of itself is just a halfway house to what we should be doing with highly parallel systems (which is where the future of computing is going to be).

Gerard van Vooren • August 14, 2014 8:06 AM

@ Clive Robinson

I agree that most of the languages are unsuitable, but if the dominant language, C, is going to be replaced with safer alternatives (that is, multiple languages), and there is still only extern "C", then AFAIK the weak link is still that interface.

Wirth showed a long time ago that it is possible to have type safe compiled modules / packages that can be linked together at runtime. Yet the shared libs in the *nix world are AFAIK still not type safe. I am not even discussing the concept of contracts.

The same with state machines. There we just push in a load of bytes and expect the state machine to do all the bit-shifting, filtering etc. While this concept works rather well, it could be a lot better IMO.

Why not have an interface such as an Ada .ads file? A universal, human-readable text file with function and type declarations (without macro and XML crap). And yes, maybe that should have multiple returns, threading/processing and other features, but such an interface file could then be processed by the programming language. A shared library or state machine would then have a binary implementation of that interface file. So in fact I am talking about modern, advanced, type-safe C header files.
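As a rough illustration of the interface-plus-implementation split Gerard describes, here is a hypothetical C sketch (all names invented): the declarations spell out the contract in human-readable form, while the implementation hides behind an opaque type, the way an .ads spec hides its package body.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdlib.h>

/* --- the "interface file" part: types, signatures, and contracts --- */
typedef struct queue queue;              /* opaque: layout is private      */
queue *queue_new(size_t cap);            /* cap > 0; NULL on failure       */
bool   queue_push(queue *q, int value);  /* false when full, no overwrite  */
bool   queue_pop(queue *q, int *out);    /* false when empty               */

/* --- a minimal "binary implementation" of that interface --- */
struct queue { size_t cap, len, head; int *buf; };

queue *queue_new(size_t cap)
{
    if (cap == 0) return NULL;
    queue *q = malloc(sizeof *q);
    if (!q) return NULL;
    q->buf = malloc(cap * sizeof *q->buf);
    if (!q->buf) { free(q); return NULL; }
    q->cap = cap; q->len = 0; q->head = 0;
    return q;
}

bool queue_push(queue *q, int value)
{
    if (q->len == q->cap) return false;        /* contract: reject when full */
    q->buf[(q->head + q->len) % q->cap] = value;
    q->len++;
    return true;
}

bool queue_pop(queue *q, int *out)
{
    if (q->len == 0) return false;             /* contract: reject when empty */
    *out = q->buf[q->head];
    q->head = (q->head + 1) % q->cap;
    q->len--;
    return true;
}
```

In C the contract lines are only comments; the thread's wish is for an interface format where a compiler, in any language, could actually check them.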

Anura • August 14, 2014 12:13 PM

@Gerard van Vooren

What you want are modules. Header files are for show in museums; modules created by the compiler that list function signatures and all that are what we should be doing. I have a language in my head that does exactly that; it creates what I am calling a manifest file for each library, which contains the function signatures, types, and optionally a textual description (for which I am ripping off XML comments in .NET).

Of course, my goal was not type safety; it was to minimize work, and the type safety is just a bonus. In particular, it was so you didn't have to write header files, and so a newbie could create a multi-file project without having to mess with things like makefiles while not requiring the use of an IDE. It has an explicit folder structure that has to be followed, and as long as you follow it, it magically does everything for you (although for more complex stuff you have to specify build config files).

Nick P • August 14, 2014 1:41 PM

@ Gerard, Anura

Modula-2 has modules with interface and implementation separated. It's also a type-safe language used for OSes, compilers, apps, etc. Modulipse is an Eclipse plugin for Modula-2 development whose site implies it can autogenerate some of these. One can compile the production code with Excelsior's awesome freebies.

So, that's a bit in the direction you want to go.

Gerard van Vooren • August 14, 2014 4:24 PM

@ well everyone

Yes, I mean modules. However, and that is what the article is about, we are talking about interfaces and contracts here.

It would be nice if there were one replacement language for C, but it looks like that is not going to happen. C++ didn't take that place and neither did Java. Now there is Go, and in the near future Rust.

So let's assume there is not one single language that will take over the dominance of C. Let's assume there will be multiple languages.

Wouldn't it be nice if they didn't interface to each other with extern "C" but with language-independent interface files that are type safe and define clear contracts? Interface files that turtle all the way down?

The benefits are clear. Modules can be replaced very easily, and the "batteries included" languages can use standard modules instead of their own "batteries".

The downside is that compiling from source requires multiple compilers and the interface files need to be well designed and very strict.

Anura • August 14, 2014 5:17 PM

@Gerard van Vooren

There are languages that could replace C, like Ada, although it's unlikely that people will stop using C any time soon.

"Wouldn't it be nice if they didn't interface to each other with extern "C" but with language-independent interface files that are type safe and define clear contracts? Interface files that turtle all the way down?"

Look at the .NET CLR. Yes, it's high level, but it basically does that. Something similar can be done for other languages, but there are problems in terms of language features. For example, a language like C can't handle exceptions, but C++ can... Now, you can call C from C++, but the other way? I think there needs to be a language hierarchy:

Type 1: Minimalist, no exceptions, heap memory is manually freed and allocated (think C, but modernized)

Type 2: Exceptions, RTTI, not good for kernel development, have to think about memory, but provides ways to do some automatic freeing of memory (think C++, but possibly with tracing garbage collection for some types).

Type 3: Everything is handled behind the scenes, don't have to think about memory, focused on features and ease of use (think Java).

Type 4: Scripting language, like 3 but interpreted (think Python).

The idea is that if you design it right, so that each language is, feature-wise, a superset of the previous (with some exceptions in terms of things like memory management), then Type 4 can reference Types 1-3, Type 3 can reference Types 1-2, and Type 2 can reference Type 1 without effort. This would specify required features, standard library, data types, etc., and languages would have to conform to the specifications. There is no effort to work with languages within the same type, but you would have to provide a wrapper interface to handle things like memory management if you want to work with a lower-level language.

Also, I wouldn't try to design this for any existing language, because there are problems there. Security is important: every component should have bounds checking and some basic constraints, like making sure variables are initialized before use, and you aren't going to be able to keep backwards compatibility that way.

Nick P • August 14, 2014 7:13 PM

@ Gerard

It's funny that your arguments look a lot like those that led to a solution: CORBA. It supports multiple languages and platforms despite the hardships Anura mentions. Interface files are defined, from which code is autogenerated. Unfortunately they kept adding to it until it was unbearably bloated. It's still used in the military (including MILS kernels), and OKL4 integrates components using a version of it called CAMKES.

Another solution was designing the language or platform for cross-language development. Ada is one of the few I know designed for that, but interfacing still had issues. The VMS OS was a better model, as it standardized types, calling conventions, etc., to push integration onto compiler writers. It seemed to work, as the VMS OS and apps were written in 5+ languages.

Anura pointed out this was a specific goal of .NET's CIL. They also extend the possibilities with Singularity and Verve. I mentioned that there was an older mainframe that had a firmware Nucleus component that did memory calls, control flow, concurrency primitives, exceptions and more. The OS was written in code on top of it. In theory, your approach can be built on something like that as it's higher level and flexible.

Today, the work that looks into what you talk about falls into a few main categories. One treats code in different languages as distributed apps communicating over efficient protocols; ZeroMQ is the best of these. Another subfield works on specification and configuration languages specifically for integration, but usually at a higher level (e.g. architecture). Another is feature-oriented programming, where IDEs are extended to recognize features representing a chunk of changes and additions to repos, which can then be added, removed, reintegrated, etc. A small niche is doing it for formal methods. Finally, enterprise software architecture posts and papers explore new ways to integrate apps (esp. legacy + new). They might have a gem or two.

So, there's some stuff for you to look into and think on.

Gerard van Vooren • August 15, 2014 9:59 AM

@ Anura, Nick P

I got carried away yesterday. Nick P is right. An idea to "solve all existing problems" will probably end up being a problem itself. Think about that XKCD joke about standards.[1]

@ Anura, I like your hierarchical approach idea. It will never happen of course, but it certainly makes sense.

However, today the foundation is the weakest link. Fixing that is going to be a major challenge. I wonder what Linus Torvalds thinks about this.

[1] http://xkcd.com/927/

Anura • August 15, 2014 12:50 PM

@Gerard van Vooren

We still have about 3-6 decades before programming languages are obsolete; that's plenty of time to develop the languages, a new OS, and all software in those new languages, and to send C and C++ to the programming-language afterlife with Perl and COBOL.

Gerard van Vooren • August 15, 2014 1:13 PM

@ Nick P

Yes I read that article some time ago. I think it is related to the topic.

In that article there is an interesting link [1]. A quote:

1.3 The Facts

Ultimately, there are two compelling reasons to consider microkernels in high-robustness or high-security environments:

  • There are several examples of microkernel-based systems that have succeeded in these applications because of the system structuring that microkernel-based designs demand.
  • There are zero examples of high-robustness or high-security monolithic systems.

Still Linux runs everywhere. It is a mentality issue. We want it now and it must be fast! [2]

[1] http://www.coyotos.org/docs/misc/linus-rebuttal.html
[2] http://benchmarksgame.alioth.debian.org/u64/benchmark.php?test=all&lang=all&data=u64

Gerard van Vooren • August 15, 2014 1:26 PM

@ Anura

True. Let's see how the new programming languages work out.
