"Taxonomy of Operational Cyber Security Risks"

I’m a big fan of taxonomies, and this—from Carnegie Mellon—seems like a useful one:

The taxonomy of operational cyber security risks, summarized in Table 1 and detailed in this section, is structured around a hierarchy of classes, subclasses, and elements. The taxonomy has four main classes:

  • actions of people—action, or lack of action, taken by people either deliberately or accidentally that impact cyber security
  • systems and technology failures—failure of hardware, software, and information systems
  • failed internal processes—problems in the internal business processes that impact the ability to implement, manage, and sustain cyber security, such as process design, execution, and control
  • external events—issues often outside the control of the organization, such as disasters, legal issues, business issues, and service provider dependencies

Each of these four classes is further decomposed into subclasses, and each subclass is described by its elements.
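
The paper itself contains no code, but as a rough sketch, here is one way the class/subclass/element hierarchy could be represented in Python. Only the four top-level class names are taken from the summary above; the lower-level entries are illustrative examples drawn from the discussion below, not the paper's complete list.

    from dataclasses import dataclass, field

    @dataclass
    class Node:
        """One entry in the taxonomy: a class, subclass, or element."""
        code: str                 # dotted identifier, e.g. "1", "1.2", "1.2.3"
        name: str
        children: list["Node"] = field(default_factory=list)

    # Illustrative only: top-level classes from the report's summary,
    # lower levels filled in from examples mentioned in the comments.
    TAXONOMY = Node("0", "Operational cyber security risks", [
        Node("1", "Actions of people", [
            Node("1.2", "Deliberate", [Node("1.2.3", "Theft")]),
        ]),
        Node("2", "Systems and technology failures", [
            Node("2.1", "Hardware"),
            Node("2.2", "Software"),
            Node("2.3", "Systems"),
        ]),
        Node("3", "Failed internal processes"),
        Node("4", "External events", [
            Node("4.1", "Disasters"),
            Node("4.2", "Legal issues"),
            Node("4.3", "Business issues"),
            Node("4.4", "Service dependencies"),
        ]),
    ])

    def find(node: Node, code: str) -> "Node | None":
        """Look up an entry by its dotted code, e.g. find(TAXONOMY, '1.2.3')."""
        if node.code == code:
            return node
        for child in node.children:
            if code == child.code or code.startswith(child.code + "."):
                return find(child, code)
        return None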

Posted on August 10, 2011 at 6:39 AM

Comments

Alex August 10, 2011 7:12 AM

Just a heads up, VERIS is a nice, comprehensive alternative and comes with a sweet data set of 1700 incidents. Also, a free data sharing website.

Just sayin’

Dave August 10, 2011 8:24 AM

Assuming you’re not talking about Veris, Veris, Veris, or Veris, I guess you are suggesting Veris.

Veris is a method of categorising security incidents. Bruce’s post is about a taxonomy of security risks.

I’m sure it’s wonderful, but it’s not the same thing.

Clive Robinson August 10, 2011 8:32 AM

It’s clearly been done by “let’s chuck a lot of ideas into a pot, tip it out, and group them up”.

As a result there is a great deal lacking in it. Firstly, it gives the overall impression of being “of an instant in time” at a high level, while trying to cover some life-cycle issues at lower levels…

For instance, in the “lacking” area, class 1 is broken into three areas: accidental, deliberate, and inaction. One problem is that “inaction” is actually a subset of “deliberate”, and likewise “accidental” is a subset of “inaction”. That is, you deliberately chose not to implement a preventative measure, and as a consequence an event happens. Splitting them out defies the recognised scientific principle of cause and effect, and thus leaves a number of gaping holes in the process.

Likewise with area 2 (hardware, software, and systems): why is design only in systems, testing only in software, and obsolescence only in hardware?

I could go on, but this work looks like what you would expect as a first step from an undergraduate class asked to come up with a risk-based disaster recovery plan.

karrde August 10, 2011 9:21 AM

Whatever the shortcomings of this report, I wish the managers at my company had at least this level of understanding.

As background: there is an inside-the-company story that a Class 1.2.3 (Action of Person, Deliberate, Theft) event occurred. Reputedly, someone with a laptop full of corporate documents took the laptop on a business trip to another country, offered the laptop and its data to a different company in that other country, and disappeared.

In response, our company instituted a set of measures. The measures included software which would (a) encrypt the contents of the “My Documents” folder on disk, and (b) forbid writing data to a USB drive unless the user has access to a Management-controlled USB key.

This software, and associated process, may solve problems from Class 1.1 and Class 1.3. But it will have no effect on Class 1.2 problems.

Why?

Because the on-disk encryption is limited to specific folders, and the USB-drive-locking system is controlled in the OS. Anyone with physical access to the machine can boot it from removable media, with an alternate OS that he controls.

Thus, Class 1.2.3 problems (deliberate actions to steal data) are not prevented by this software-based solution.

However, Class 1.2 problems (deliberate actions by people with inside knowledge) are very hard to prevent with any internal process or policy.

anonymous August 10, 2011 10:53 AM

The CERT paper should include a taxonomy of risks to the various asset classes that can be impacted by cyber attack: for example government, infrastructure, public, private, etc. Of course, the main missing information is concrete material on prevention and mitigation.

Petréa Mitchell August 10, 2011 12:03 PM

The one glaring omission, to my eye, is a category for deliberate but non-malicious acts by employees, like leaving a door propped open for a period when a lot of things are being moved into or out of the space that the door protects. It’s not an accident; it’s not something someone forgot to do in haste; it’s not a lack of knowledge, skills, or anything like that.

I’d call this category “Disbelief”. As in, actions performed by someone who knows they are against the security policy, but whose personal assessment is that they will not, in fact, cause harm to the company.

Porter August 10, 2011 12:58 PM

@Dave, at the root of VERIS is a risk taxonomy (the A4 Model): an Agent commits an Action against an Asset, affecting an Attribute.

VERIS captures the threat agent, vulnerability exploited by the threat agent, as well as how the asset was impacted.

If you look at the elements of each framework, there’s quite a bit of overlap, but I think that VERIS allows for more detail. For instance, CERT’s:

1.1 Actions of People (Inadvertent) could be classified in VERIS very easily as Internal Error (errors made by internal agents).

1.2 Actions of People (Deliberate) could be classified as External or Internal agents with actions of Physical.Theft or Physical.Sabotage.

2.1.1 Systems and Tech Failures (Hardware) would be External.Capacity overload or Maintenance Error, or Technical / System Malfunction.

4.1 External Events (Disasters) would be covered in many of the External.Environmental threats.

I think you get the gist.
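
If it helps, here is that mapping written out as a quick sketch in Python. The pairings are just the examples above, not an official crosswalk between the two frameworks.

    # Illustrative CERT-to-VERIS pairings, taken from the examples in this
    # comment; not an official crosswalk between the two frameworks.
    CERT_TO_VERIS = {
        "1.1 Actions of People (Inadvertent)": [
            "Internal agent, action of Error",
        ],
        "1.2 Actions of People (Deliberate)": [
            "External or Internal agent, Physical.Theft",
            "External or Internal agent, Physical.Sabotage",
        ],
        "2.1.1 Systems and Tech Failures (Hardware)": [
            "External capacity overload",
            "Maintenance error",
            "Technical / system malfunction",
        ],
        "4.1 External Events (Disasters)": [
            "External.Environmental threats (various)",
        ],
    }

    def veris_candidates(cert_class: str) -> list:
        """Return the candidate VERIS classifications for a CERT class, if listed."""
        return CERT_TO_VERIS.get(cert_class, [])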

VERIS also captures Failed Internal Process information within the Discovery and Mitigation and Impact sections.

Take another look; I think you will be surprised how well it can serve as one of the tools of risk management.

tommy August 10, 2011 7:15 PM

@ Godel Fishbreath:

O/T posts are now supposed to be part of Friday’s Squid Blog thread, but until then, consider this:

Memorizing one phrase might be easy. I presently have 59 sets of user/pass. Usually, the username is not similar to your own, or your account number, etc. Who could remember all of those phrases?

Alternative: Password Safe, whose encryption was designed by somebody whose pic is on this page somewhere. Then you need remember only one master pw. I prefer an acronym to dictionary words, but that’s just MHO. YMMV.

@ Moderator:

Please feel free to move Godel Fishbreath’s comment and my response either to last Friday’s squid, or this Friday’s, or both, if you like. The cartoon /was/ timely, and I’m sure a lot of readers here will have read it also. Thanks.

Tony H. August 10, 2011 7:36 PM

I’m with Clive on this one. I mean, it looks good, and I certainly don’t dismiss the whole idea, but is this particular scheme really useful? I think one test is to take some real incidents, plug them in, and see if it’s clear where they fit. Fair enough that a real incident is complex, and may involve multiple paths, but if it just isn’t clear where to put things…

So take the widespread power failure that occurred in parts of eastern North America eight years ago this weekend. Outages varied from an hour or so to several days. During this time, mobile phone service died in many places because cell sites had battery backup good for a few hours at most. There weren’t enough portable generators to go around, so large areas were left without any coverage.

So where to put this, as seen from the point of view of a mobile operator’s analysis and planning? Seems to me it needs a checkmark in many boxes, and I think that doesn’t really lead to anything useful.

We can pretty much rule out 1.2 People.Deliberate, but all the rest of 1 People is probable. Under 2 Systems and Technology Failures most of 2.1 Hardware applies. Most of 2.2 Software probably escapes this one (though Wikipedia claims there was a race condition in power plant monitoring software), but all of 2.3 Systems applies. Much of the entire class 3 Failed Internal Processes fits. Under class 4 External Events, I note they use both Disasters and Hazards for the 4.1 heading (just an editing glitch?), but 4.1 is the only part that doesn’t apply. Certainly much of 4.2 Legal Issues, 4.3 Business Issues, and most of all 4.4 Service Dependencies all apply.

So really, what do we learn from this? About 90% of all bottom level elements would seem to apply to this actual situation. Are we further ahead? Would some kind of orthogonal taxonomy provide a better or worse idea of what to do, or is it just arbitrary grouping, as Clive says?

And there’s another aspect of this grouping that bothers me – there is no clear distinction of cause and effect. Pretty obviously 4.1.4 Earthquake is a cause, but which side does e.g. 4.2.3 Litigation lie on? A supplier can sue you and trigger other events, or the suit can come after the main event because of your lack of preparation. There’s something sloppy about having these in one place.

Bill F. August 10, 2011 9:27 PM

It would appear that there are multiple categorizations at even the first level of this “taxonomy”. For example, an insider who took action in violation of a process would fall into more than one of the classes…

It would appear that a more separable taxonomy might start with:

  1. People
    1.1 action
      1.1.1 insider
        1.1.1.1 deliberate
        1.1.1.2 accidental
      1.1.2 outsider
        1.1.2.1 deliberate
        1.1.2.2 accidental
    1.2 inaction
      1.2.1 insider
        1.2.1.1 process failure
          1.2.1.1.1 lack of process
          1.2.1.1.2 failure to follow process
          1.2.1.1.3 incorrect execution of process
  2. Technological
    2.1 Software
    2.2 Hardware
    2.3 Hardware/Software combination
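
As a quick sketch, that proposed structure could also be written down as nested data (Python here, purely illustrative of the outline above, not anything from the CERT paper):

    # A sketch of the proposed structure above; names and grouping follow
    # the outline in this comment, not the CERT taxonomy.
    PROPOSED_TAXONOMY = {
        "People": {
            "action": {
                "insider":  ["deliberate", "accidental"],
                "outsider": ["deliberate", "accidental"],
            },
            "inaction": {
                "insider": {
                    "process failure": [
                        "lack of process",
                        "failure to follow process",
                        "incorrect execution of process",
                    ],
                },
            },
        },
        "Technological": ["Software", "Hardware", "Hardware/Software combination"],
    }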

David August 10, 2011 9:58 PM

The overall structure is very similar to that used by the authors of “Military Misfortunes”, an excellent book on why and how even the best military organizations in history get things horribly wrong.

In that book, the taxonomy used was couched as “failure to adapt”, “failure to recognize”, etc. I would suggest it for any serious security professional; though the examples are all based on war operations, the concepts translate well into getting a handle on security threats.

David.
