Complexity and Security

I have written about complexity and security for over a decade now (for example, this from 1999). Here are the results of a survey that confirms it:

Results showed that more than half of the survey respondents from mid-sized (identified as 50-2500 employees) and enterprise organizations (identified as 2500+ employees) stated that complex policies ultimately led to a security breach, system outage or both.

Usual caveats for this sort of thing apply. The survey covers only 127 people—I can’t find data on what percentage of those asked replied. The numbers are skewed because only those who chose to reply were counted. And the results are based on self-reported answers: there is no way to verify them.

But still.

Posted on January 29, 2013 at 6:32 AM

Comments

atsacryl January 29, 2013 6:54 AM

I think it is very true: the more complex a system is, the more likely it is to have faults.

But complexity can be hard to define. There is quite a difference between a complex house and a complex tree, or between a room that is very messy and a room that has much content but keeps it all tidily stored away.

Automation is helping a great deal, as well.

Paul January 29, 2013 7:22 AM

A simple example is one commonly seen at many businesses.

The school I work at forces us to change our password for several online systems every 1-3 months. It must be at least 8 characters long and include a special character, a number, a lowercase letter, and an uppercase letter.
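For what it's worth, those rules boil down to a few character-class checks. A minimal sketch (my own guess at how the rules would be enforced, not the school's actual code):

```python
import re

def meets_policy(password: str) -> bool:
    """Check the rules described above: at least 8 characters, with at least
    one special character, one digit, one lowercase and one uppercase letter."""
    return (
        len(password) >= 8
        and re.search(r"[^A-Za-z0-9]", password) is not None  # special character
        and re.search(r"\d", password) is not None             # number
        and re.search(r"[a-z]", password) is not None          # lowercase letter
        and re.search(r"[A-Z]", password) is not None          # uppercase letter
    )

print(meets_policy("Summer2013!"))           # True: passes, and is easy to guess
print(meets_policy("correct horse battery")) # False: fails the composition rules
```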

Quite simply, most of the teachers I work with cannot remember that type of password when it changes so frequently, so they have workarounds.

Since each program forces a password reset at different intervals, some people just sync all their passwords to be the same, and reset them all on the schedule of the program that forces the most frequent password change. Others just change to a new password whenever the system prompts them. However, they can never remember the multitude of passwords or frequently changing passwords, so they keep an index card or Post-it with all of their passwords stored in or on their desk.

Steve January 29, 2013 7:31 AM

As with anything, the more complex something is, the more likely it is to fail.

I see simplicity winning even in sales. When my company offers a more expensive but single-price solution, more people reach for it than for a more customized one that would save them money.

Chris January 29, 2013 8:05 AM

Perhaps there’s an equivalent rule of thumb for legal/policy complexity, just like the “15-50 bugs per 1000 lines of code” rule for software. Whatever the case, it’s certainly true that there’s more to go wrong in a large system. Just look at the human body: immensely complex, with cell damage and infection going on all the time, yet stabilized by all kinds of things, including the immune system.
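As a back-of-the-envelope illustration of that rule of thumb (the figures are just the quoted 15-50 range, not measurements of any particular system):

```python
def expected_defects(lines_of_code: int, low: int = 15, high: int = 50) -> tuple:
    """Latent-defect estimate from the quoted 15-50 bugs per 1000 lines of code."""
    return (lines_of_code // 1000 * low, lines_of_code // 1000 * high)

print(expected_defects(100_000))     # (1500, 5000) for a 100 KLOC system
print(expected_defects(10_000_000))  # (150000, 500000) for a 10 MLOC system
```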

QL January 29, 2013 8:32 AM

Auditors and corporate ideology are part of this problem. Security becomes an exercise in blame avoidance by paperwork rather than a rational process. And I must agree with the earlier comment about password policies creating issues.

Jason Richardson-White January 29, 2013 8:40 AM

The following comments are not new observations, but just ones that I have culled from others. Speaking generally, complex phenomena tend to have the following characteristics:

High Interactivity: it is extremely difficult to isolate one problem from another, or to ameliorate its effects without causing unintended side effects in problems that are (in some sense) “neighboring”.

Non-Linearity: By “non-linearity”, small inputs to a system are associated with large outputs from that system. While non-linearity is a characteristic of many natural systems, a case can be made that the probability of a modern problem’s being non-linear is generally higher than that of pre-modern problems.

Positive Feedback: structural relations tend to encourage amplification rather than diminishment of the problems; feedback that tends to encourage “runaway” amplification.

Closed: By “closed”, I mean tending increasingly to produce “trade-offs”. In modernity, genuinely new resources are increasingly difficult to find. So, ameliorating problems by using certain resources tends to augment other problems that require those same resources.

Distributed Topology: roughly speaking, complex problems are often “network-like” rather than “tree-like”. In a network, one need not traverse to a central node to get from one place to another. There are paths directly from one place to another. Such paths make control elusive.

Characterized by Power Laws: One might naturally imagine that the distribution of edges to nodes in a network is random any time the evolution of the network is not centrally planned. As it turns out, this is often not the case. Instead of being “random”, the edges follow a power-law distribution, in the sense that a small number of nodes hold a disproportionately large share of them (see the sketch after this list). Systems (especially networks) that might otherwise seem robust are in actuality vulnerable, due to the presence of such patterns.

Path-Dependence: Complex systems are increasingly path-dependent in their dynamics, meaning that time-dependent changes cannot be reversed merely by retreating step-wise from a given state back to the state which preceded it. Increasingly, the path back is not the same as the path forward.

Cataclysmic Potential: This is the most obvious and therefore intuitive aspect of complexity. As systems become more complex, so the potential for (what we might call) “irreversible worsening” has grown.
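On the power-law point in particular, here is a minimal sketch (preferential attachment, one standard way such skewed distributions arise; the node count and seed are arbitrary) showing how a few hubs end up with a disproportionate share of the edges:

```python
import random
from collections import Counter

def preferential_attachment(n_nodes: int = 10_000, seed: int = 1) -> list:
    """Grow a network where each new node links to an existing node with
    probability proportional to that node's current degree."""
    random.seed(seed)
    edges = [(0, 1)]            # start with a single edge
    endpoints = [0, 1]          # flat list of edge endpoints; sampling from it
                                # samples nodes proportionally to their degree
    for new in range(2, n_nodes):
        target = random.choice(endpoints)
        edges.append((new, target))
        endpoints.extend((new, target))
    return edges

edges = preferential_attachment()
degree = Counter()
for a, b in edges:
    degree[a] += 1
    degree[b] += 1

# The 10 best-connected nodes (0.1% of the network) hold far more than
# 0.1% of the edge endpoints.
top10 = sum(d for _, d in degree.most_common(10))
print(f"top 10 nodes hold {top10 / (2 * len(edges)):.1%} of all edge endpoints")
```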

hxx January 29, 2013 9:02 AM

Yes! Complexity is out of control. Just look at SELinux: an incredibly complex kernel patch to secure an exponentially complex Linux kernel, which is who knows how many millions of lines of code by now.

Vles January 29, 2013 9:17 AM

..so..
Instead of producing a matching password plus access token plus fingerprint and iris scan, you immediately walk unobstructed toward the coveted prize except you are shadowed by an eighteen-year-old with an M-16. If at any moment you are not who you’re supposed to be, you get shot.

paul January 29, 2013 9:23 AM

Some complexity is unavoidable, but way too much of the time it’s about trying to fix design mistakes or trying to eliminate discretion (which may also be a design mistake). Whatever simple policy you declare, there will have to be exceptions, and unless you design and implement your exceptions just right there will be exceptions to the exceptions. And so forth.

And then at a certain level of complexity (as above with the passwords, only more so) people will start disregarding the official policy entirely and implement a shadow policy. Which in turn will require increasing complexity to make sure that the lies being told on the official documents all match up…

MikeA January 29, 2013 9:59 AM

I think Hoare’s Turing-award lecture hit it on the head: “… or so complicated that there are no obvious deficiencies”. To him, this was a bad thing, but to those who promulgate complex policies, it is an excellent way to deflect blame.

notdjbFan January 29, 2013 10:00 AM

@Chris, @hxx:

Daniel J. Bernstein, when designing djbdns and qmail, split off different features and services into mutually untrusting components. (qmail’s license has restrictions, primarily involving compatibility)

Quoting http://en.wikipedia.org/w/index.php?title=Djbdns&section=4#Design :

In djbdns, different features and services, such as AXFR zone transfers, are split off into separate programs. Zone file parsing, caching, and recursive resolving are also implemented as separate programs. The result of these design decisions is a dramatic reduction in code size and complexity of the daemon program that answers lookup requests. Daniel J. Bernstein (and many others) feel that this is true to the spirit of the Unix operating system, and makes security verification much simpler.

Quoting http://en.wikipedia.org/wiki/Qmail#Security :

qmail has a modular architecture composed of mutually untrusting components; for instance, the SMTP listener component of qmail runs with different credentials than the queue manager, or the SMTP sender. qmail was also implemented with a security-aware replacement to the C standard library, and as a result has not been vulnerable to stack and heap overflows, format string attacks, or temporary file race conditions.
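This isn't qmail's actual code, but a minimal sketch of the pattern behind "mutually untrusting components" under typical Unix assumptions: do the privileged setup first, then permanently drop to a dedicated unprivileged account before parsing any hostile input. The user name and the handle_session helper below are made up.

```python
import os
import pwd
import socket

def drop_privileges(username: str) -> None:
    """Permanently switch to an unprivileged user (process must start as root)."""
    entry = pwd.getpwnam(username)
    os.setgroups([])           # shed supplementary groups
    os.setgid(entry.pw_gid)    # change group first, while we still can
    os.setuid(entry.pw_uid)    # point of no return

def handle_session(conn: socket.socket) -> None:
    """Stub: parse the SMTP dialogue and hand the message to a separate
    queue-manager process, which would run under yet another account."""
    conn.close()

def smtp_listener() -> None:
    # Privileged step: bind the low port while still root.
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.bind(("0.0.0.0", 25))
    sock.listen(16)

    # Everything that reads hostile network data runs as a throwaway user,
    # isolated from the queue manager's and sender's credentials.
    drop_privileges("smtpd-user")

    while True:
        conn, _ = sock.accept()
        handle_session(conn)
```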

Spencer January 29, 2013 10:07 AM

There is an interesting contradiction in that complex systems that evolve naturally (think of the global biosphere) turn out very robust, able to recover from major catastrophes. In human-built systems, complexity is usually a very bad thing, whether for reliability or for security.

So how do we learn to build complex systems the way Nature does?

meryle January 29, 2013 10:55 AM

I like @Spencer’s comment.

The problem is that many human-built systems (in smaller business contexts) don’t have the opportunity to weed out the failures and improve. Systems in companies don’t get to evolve; they get put in place, often with limitations, and then get replaced with other systems developed under the same crunched budget and timeline.

RH January 29, 2013 10:57 AM

@Spencer: I believe the trick is that nature makes systems where the large-order behavior is very robust, while we as humans are interested in the survival of smaller elements that nature would shuck off without a second thought.

Nature might be interested in preserving Corporations, as an ideal. Men are interested in saving the single corporation that pays their paycheck.

Brittany January 29, 2013 11:27 AM

Are there any examples of a simple set of constitution/bylaws or organizational policy that cover the basics of expecting trust (like an honor code, let’s say), without being onerous?

Clive Robinson January 29, 2013 11:42 AM

First off, complexity cannot be controlled, but it can be managed.

By this I mean that complexity in a system is a measure of the number of possible interactions between the component parts. It is a form of entropy: the more complexity there is in a system, the more useful it is in terms of what can be done.

The way to manage complexity is very similar to project management in a quality system.

To get the best return from a system you first have to work out what the subsystems are within it, you then try to increase the complexity in the subsystems whilst reducing the complexity at the interfaces between the subsystems.
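A back-of-the-envelope way to see why this pays off (a minimal sketch; the component counts are arbitrary): with n freely interacting components there are n(n-1)/2 possible pairwise interactions, and partitioning them into subsystems that only talk through narrow interfaces cuts that number sharply.

```python
def pairwise(n: int) -> int:
    """Possible pairwise interactions among n freely interacting parts."""
    return n * (n - 1) // 2

# One flat system of 100 components:
print(pairwise(100))                      # 4950 potential interactions

# The same 100 components split into 10 subsystems of 10, which only
# interact with one another through their interfaces:
inside = 10 * pairwise(10)                # interactions hidden inside subsystems
between = pairwise(10)                    # interactions between subsystem interfaces
print(inside + between)                   # 450 + 45 = 495
```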

Further, you use the equivalent of “pipelining techniques” to remove various effects caused by applying positive feedforward or positive feedback, as these tend to increase the system’s sensitivity to changes. However, whilst negative feedforward or feedback tends to make systems more stable, you have to be aware of delays within the system that cause negative feedback to become, in effect, positive feedback.
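A toy simulation of that last point (a minimal sketch with made-up gain and delay values): a negative feedback loop that settles when the correction is applied immediately can grow without bound once the correction is based on a stale observation.

```python
def simulate(gain: float = 0.9, delay: int = 0, steps: int = 40) -> list:
    """Drive a disturbed value back toward 0 using delayed negative feedback."""
    history = [1.0]                                   # initial disturbance
    for _ in range(steps):
        # With delay > 0 the correction is computed from a stale observation.
        observed = history[max(0, len(history) - 1 - delay)]
        history.append(history[-1] - gain * observed)
    return history

print(abs(simulate(delay=0)[-1]))   # essentially zero: the loop settles
print(abs(simulate(delay=3)[-1]))   # the same loop, lagged: it grows instead
```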

Most complex systems, provided they have predictability, can have system equations developed for them, which allows control theory and digital signal processing techniques to be applied to identify the problem areas. Whilst this is known to engineering students studying electronics or mechanics, it appears to be an unknown field of endeavour to most code cutters, irrespective of their academic level.

Even where overall systems are not predictable or not amenable to analysis, most of the subsystems can be designed to be so.

The question is: are people prepared to put up with system design times going up by a factor of five or so just to manage complexity effectively?

I suspect that the answer of many will be no. However, the same discussions were had over quality systems 30 years ago, and now we accept QC systems as a matter of course because the benefits far outweigh the process costs.

Clive Robinson January 29, 2013 11:57 AM

@ Jason Richardson-White,

_Non-Linearity_: By “non-linearity”, small inputs to a system are associated with large outputs from that system

Err no.

What you have described is the overall process gain of a system (however you choose to measure it), which might or might not be linear.

To be non-linear, the gain of the system would have to have a non-linear relationship between the input signal and the resulting output signal.

That is, if you put a sine wave into a simple amplifier system and got something that was not a sine wave out, then the system would be non-linear.

The effect of non-linear behaviour can be modelled by a sum-of-harmonics equation, due to the way the signal is transformed. The simple example is when the input goes beyond a certain threshold: the output hits the rails and you end up with an approximation of a square wave, which is a weighted sum of mainly odd harmonics.
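A quick numerical check of that description (a minimal sketch using NumPy; the frequency, drive level, and clipping limits are arbitrary): clip a sine wave and the spectrum picks up energy almost entirely at the odd harmonics.

```python
import numpy as np

fs = 1000                        # samples per second
t = np.arange(fs) / fs           # one second of signal
x = np.sin(2 * np.pi * 10 * t)   # 10 Hz sine wave
clipped = np.clip(3 * x, -1, 1)  # amplifier driven past its rails

spectrum = np.abs(np.fft.rfft(clipped))
for harmonic in range(1, 8):
    # Each bin index is a frequency in Hz here, so harmonic n sits at bin n*10.
    print(harmonic, round(spectrum[harmonic * 10] / spectrum[10], 3))
# Odd harmonics (3rd, 5th, 7th...) carry the distortion; even ones are near zero.
```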

Jason Richardson-White January 29, 2013 12:29 PM

@Clive,

You got me. I should have said “disproportionately” large. But I think my point is still well-taken — you got it well enough to correct me. 😉 Thanks for putting what I meant better than I did.

Speaking as a systems engineering student, one of the more important heuristics of the systems architect is never to build a system that you can’t understand. For example, I am skeptical of a variety of attempts to create the next generation of computer merely by mimicking the way the brain operates. If we were to succeed, beyond our wildest imagination, in creating a computer with human-like intelligence, we might be able to talk to it — but we wouldn’t know why it worked, or how to correct it if it started to misbehave (thus the sci-fi genre).

Jason Richardson-White January 29, 2013 12:32 PM

Re: just posted
I’m not saying that I’m worried about it, in particular. I’m saying that the motivation to simplify complex systems is well-founded, from a theoretical perspective. Well-understood assemblies, hubs, or no, there are probably at least as many examples of complex systems that have failed due to undue complexity as have succeeded (e.g., QMSs, or Quality Management Systems) despite complexity.

Nick P January 29, 2013 12:51 PM

Complexity is an enemy to security. We have all discussed this plenty on the blog. Bruce’s post was a start. Years on, we’ve discussed how to manage complexity of system development and put more predictability into software methodology. I’ve often cited research and actual production use of such technology. So, what to say at this point in response to comments?

First off, congrats to Vles for beating me to DJB’s excellent paper. 😉 The easiest rules of thumb to learn from it are “less code” in general, “less trusted code,” and isolation of problematic code. The latter two are old concepts going way back to pre-Orange Book days. You might say DJB was the first to apply them to an open source project with long-term usefulness.

What of system complexity, though? How do we deal with the problems? Another link someone posted was of the opinion that all systems are in a degraded mode just waiting for a catastrophe and root cause analysis isn’t the right way to go. I think that might be true for many systems. I disagree with it as a maxim. I’m not the only one either. Here’s why: humans have made plenty of complex hardware and software products that did their job precisely without major failure for years on end.

So, why do some systems achieve these good results and others don’t? It’s more than methodology. There’s plenty of domain knowledge required for many of these systems. Others might push components or technology in unknown directions. Some are mostly static, while some change plenty. So, what does it take to make a complex system that doesn’t fail?

A Japanese writer on IEEE/ACM gave us a clue. He gave a chronological account of how the Japanese train system got from trouble to highly assured operation. One common trend was understanding the operating environment, the failure modes of components (e.g. trains, passengers), and probable system wide failures. The more we understood about problems in the domain, the more easily we could build systems that survive them. LOCK project noted this as “theories of security come from theories of insecurity.”

This all sounds like common sense but you will see all too often risky projects go forward without a thorough hazard or security analysis. The safety-critical and risk management communities have many methodical ways of doing such analyses. SQUARE is a recent one for the system/software community. Designing a survivable system means anticipating problems, general and specific, that might lead to catastrophe. Then dealing with them somehow.

I’ve repeatedly discussed how to handle requirements, design, implementation, testing, and integration. I refer people to previous discussions on this blog for those.

User interactions can make or break the security of a system. Two common ways of dealing with this: the system restricts their choices by technical means; the organization restricts choices via policy and/or provides recommended usage guidelines. Real systems are usually a combination of these two. So, which should we lean toward?

Well, a decade of IT indicates the pessimists were right: users will destroy themselves if given the option to. This might be the result of ignorance, apathy or malice. Restricting access to critical control code, recoverability, sanity checks on user input, and safeguards built into automated processes are all good ideas. It also pays to assume automated control might glitch and do something ridiculous. Countermeasures might include putting a human in the loop to apply common sense to control decisions or using a simpler system to do this. Such a countermeasure has saved at least one bank from an access control system gone wild.
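A minimal sketch of that kind of safeguard (the threshold, the badge-revocation example, and the confirmation step are all made up, not any particular bank's system): the automation proposes an action, and anything outside a plausible range waits for a person.

```python
# Hypothetical guard around an automated control decision: actions inside a
# sane range go through automatically; anything unusual waits for a human.
PLAUSIBLE_RANGE = (0, 500)   # e.g. number of access badges to revoke at once

def apply_decision(count: int, execute, ask_human) -> None:
    low, high = PLAUSIBLE_RANGE
    if low <= count <= high:
        execute(count)
    elif ask_human(f"Controller wants to revoke {count} badges. Proceed?"):
        # The automation may have glitched; a person applies common sense.
        execute(count)

apply_decision(
    12,
    execute=lambda n: print(f"revoking {n} badges"),
    ask_human=lambda q: input(q + " [y/N] ").lower() == "y",
)
```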

Some readers might say, “Well, that’s nice for the problems code can solve, but what of the others?” That’s where good practices and policy come in. Let’s talk practices first.

Good practices will ensure the system operates smoothly. Good practices [ideally] won’t introduce errant behavior into the system. How to create these? I think these aspects of the requirements are best created by having a good group brainstorm, analyse and double-check each other. The group should be a mix of users, domain experts, system designers, and maybe a security expert. Including users increases their feeling of ownership of the system and helps them understand why they do the best practices. Justifications for these should be written down so the knowledge can be transferred later to new hires. This has an added benefit later on in that business heuristics aren’t opaquely included in the legacy system itself, making it hard to replace.

Now for policy. I’m focusing now on the issue Bruce mentioned: complicated policy. It’s obvious that complex policy is hard to follow and possibly ambiguous. Old Orange Book guidelines required formal, precise security policy for high assurance systems and proof that it was embedded in the requirements. The takeaway is that it might help to apply system engineering principles to policy itself: make it unambiguous; check it for inconsistency when it is developed and changed; decompose it into smaller pieces, maybe module by module; and include regular review steps that incorporate feedback from users and system admins.
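As a toy illustration of the "check it for inconsistency" step (a minimal sketch; the rule format is invented, not any standard policy language): represent the policy as explicit rules and flag pairs that contradict each other.

```python
# Invented rule format: (subject, action, object, decision).
rules = [
    ("teachers", "read",  "gradebook", "allow"),
    ("teachers", "write", "gradebook", "allow"),
    ("contractors", "read", "gradebook", "deny"),
    ("teachers", "write", "gradebook", "deny"),   # conflicts with the second rule
]

def find_conflicts(rules):
    seen = {}
    conflicts = []
    for subject, action, obj, decision in rules:
        key = (subject, action, obj)
        if key in seen and seen[key] != decision:
            conflicts.append(key)
        seen.setdefault(key, decision)
    return conflicts

print(find_conflicts(rules))   # [('teachers', 'write', 'gradebook')]
```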

I’ll probably write something on the “nature” angle later on. Hope you enjoy this for now.

Jason Richardson-White January 29, 2013 1:13 PM

@Nick,
It’s worth noting that this is precisely the position of one of the US’s most prestigious systems engineering departments — the Engineering Systems Division at MIT. They support making policy “variables” part of the total design process for complex systems. In fact, they have argued that this is what makes their department different from others.

Alex January 29, 2013 1:29 PM

“…we have fallen into the trap of bolting on more and more security layers and policies.”

Whether computer security or DHS/TSA, this statement is soooo true. People need to stop thinking about individual threats and scenarios and focus on the whole picture. Using scenarios to examine current security is a good exercise, but you can’t possibly think of every single situation. Security is the WHOLE picture, not just individual rules. For as good as today’s automated intrusion detection systems (and the TSA) are, they’re just following a list of rules with no thought. Attackers (I refuse to use the T word anymore) are aware of this and of the rules, even if the rules are constantly changing, and will easily find a weakness. Come up with something new/unique and you’re almost guaranteed to get in. Even the same scam, just dressed slightly differently, will work.

@atsacryl’s: You’re spot-on. The more complexity, the more variables are at play. I remember the MS-DOS days back when I hand-coded most things rather than rely upon outside vendors, APIs and SDKs. I remember being awakened at 3am because a new piece of code went down. Half-asleep, I walked the employee through changing the offending code, re-compiling, and resuming the services, all over the phone. Now I wouldn’t dream of doing that with most of the programs running in our office. I’d want a mem dump/error log at the very least these days. Even then, I usually have to kick the problem to the vendor whose code is causing the problem.

@Spencer: Nice idea, but in practice nature fails every day. Think of individual human beings as applications/servers and their immune systems are firewalls. How many people in your workplace became ill last month? Last 60 days? Past year? If that % of applications/servers had been bypassed, would your IT people still have jobs? If so, where do you work and do they pay well — I’d like to get paid to do nothing all day.

Alex January 29, 2013 1:40 PM

@Nick: Complexity CAN be on your side, as long as you’re aware of everything that’s going into it. For example, a mantrap door setup is far more secure than a single conventional door. However, a mantrap that uses a keycard for security is more complex BUT also now has new vulnerabilities: Did the installer leave the test-mode cards enabled? What other cards are enabled?

I was once on a property where test proximity cards were still enabled on the system 2+ years after the property opened. What constituted a test card? Any unprogrammed one. Literally, if you could get ahold of a blank card, or erase one, you had a master key to the entire property. Are proximity locks more secure than mechanical keys? They thought they were.

ARS January 29, 2013 1:42 PM

I feel this article is appropriate for the topic:

http://io9.com/5897134/researchers-describe-a-new-evolutionary-theory-the-black-queen-hypothesis

From what I gleaned (I’m not a biologist), there’s an advantage to eliminating complexity in a system X, as long as there’s someone else providing the results of X who can afford to sacrifice some of that output. It’s non-parasitic, in the sense that the output would have gone to waste anyway.

Relevant citation:

[The Black Queen Hypothesis] refers to a playing card, in this case the queen of spades in the game Hearts. In Hearts the goal is to score as few points as possible. The queen of spades, however, is worth as many points as all other cards combined, and therefore a central goal of the game is to not be the player that ends up with that card.

In the context of evolution, the BQH posits that certain genes, or more broadly, biological functions, are analogous to the queen of spades. Such functions are costly and therefore undesirable, leading to a selective advantage for organisms that stop performing them. At the same time, the function must provide an indispensable public good, necessitating its retention by at least a subset of the individuals in the community — after all, one cannot play Hearts without a queen of spades.

— End Cite

I would say at first glance that OpenID was a first attempt in computer systems to reduce the need for different websites to require you to set up an account for each. Of course, that makes the security of the OpenID provider paramount.

AD January 29, 2013 10:34 PM

RESIST COMPLEXITY™

Many information systems and business processes are products of design-by-committee, a process which produces inelegant and complex designs.

The simplest solutions are often based on conceptual models, but the process of “selling” a conceptual model to a committee can quickly kill it. Committees tend to prefer specialization over generalization, and in so doing they fail to recognize where complexity can be trimmed away from their organization. The product is a Frankenstein of micro-solutions with glue holding the ill-fitting pieces together.

Dave January 30, 2013 1:40 AM

Instead of producing a matching password plus access token plus fingerprint and iris scan, you immediately walk unobstructed toward the coveted prize except you are shadowed by an eighteen-year-old with an M-16. If at any moment you are not who you’re supposed to be, you get shot.

Realizing that having random people shot by nervous recruits wouldn’t look too good, the powers that be have addressed the problem by not issuing them any ammunition.

Gweihir January 30, 2013 5:55 AM

@hxx: The Linux kernel is not that complex, most LOCs are driver code. Here you mainly want to do without most drivers, and that is one of the few areas where virtualization actually helps security: It simplifies the driver model.

We finished building a secure (virtualized) browsing environment based on SELinux last year, and it turns out that the main complexity comes from stupid software like Acrobat Reader, which most people want or need but which does really bad things like executing code on the stack, hence preventing a non-executable stack. Still, SELinux allows you to permit this stupidity only for Acrobat Reader and to prevent it in turn from writing anything except its own configuration files in its own configuration directory.

Jason Richardson-White January 30, 2013 1:22 PM

@Sven,
To be exposed so swiftly as a “bleeding heart”. I should never post here.

Nonetheless, I maintain that the principles which I elicited in that other link do apply to security (in Bruce’s sense), though less well. Complexity will not degrade a security system for (say) a bank nearly so quickly as the “security system” for the entire world (which is closer to the point of that other link). However, I can concede this point and still argue that complexity for (say) the US government’s internet presence is subject to at least some of the considerations of my proposed list.

@Cliff,
I do intend to say more. I have said a bit more in other places. (But if I say anything more, Sven will be sure to out me.) Eventually, I want to write at book length on the general strategy of deliberately contracting the world economy and human population, simultaneously, in order to bring us back from the brink of too much complexity.

But I’m not there yet. Right now I’m reading Liars and Outliers (my wife got it for me for the holidays).

Thanks for your point about the “physics” of complexity. I’m not sure that the analogy works, because complexity is not a homogeneous phenomenon. You talk of some parts not knowing of others, etc. But this makes each part an agent. In situations of high complexity, a major difficulty is in creating reasonable interfaces between entities at radically different scales. Some might be “rational” (in the loosest sense of the word), some might not. Still, I like your idea and I will think about it.

Thanks to Bruce for permitting this side-chatter. Security of the whole world is presumably not a central topic for this blog.

b January 30, 2013 5:56 PM

Of course there were mind-bogglingly complicated financial instruments and corresponding bets, but LIBOR wasn’t one of them. It simply asked of a group of selected banks, “At what rate would you be able to borrow USD for any particular time period?” The BBA would then discard the top and bottom rate quotes and average the rest. This rate would be set at a particular time (I think 10 a.m.) once per day, and that was that. Your loan margin, say 3%, would then be added to the LIBOR rate, and you’d be sent a bill.
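The calculation described is just a trimmed average. A minimal sketch with made-up panel quotes (actual LIBOR discarded the top and bottom quartiles of submissions rather than single quotes, but the idea is the same):

```python
# Made-up panel submissions, in percent.
quotes = [0.52, 0.55, 0.49, 0.61, 0.50, 0.58, 0.47, 0.53]

def trimmed_average(quotes, trim: int = 1) -> float:
    """Drop the `trim` highest and lowest quotes, then average the rest."""
    kept = sorted(quotes)[trim:-trim]
    return sum(kept) / len(kept)

libor = trimmed_average(quotes)
margin = 3.0                               # the borrower's 3% spread
print(round(libor, 4), round(libor + margin, 4))
```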
