Security Orchestration and Incident Response

Last month at the RSA Conference, I saw a lot of companies selling security incident response automation. Their promise was to replace people with computers, sometimes with the addition of machine learning or other artificial intelligence techniques, and to respond to attacks at computer speeds.

While this is a laudable goal, there’s a fundamental problem with doing this in the short term. You can only automate what you’re certain about, and there is still an enormous amount of uncertainty in cybersecurity. Automation has its place in incident response, but the focus needs to be on making the people effective, not on replacing them: security orchestration, not automation.

This isn’t just a choice of words; it’s a difference in philosophy. The US military went through this in the 1990s. What was called the Revolution in Military Affairs (RMA) was supposed to change how warfare was fought. Satellites, drones and battlefield sensors were supposed to give commanders unprecedented information about what was going on, while networked soldiers and weaponry would enable troops to coordinate to a degree never before possible. In short, the traditional fog of war would be replaced by perfect information, providing certainty instead of uncertainty. They, too, believed certainty would fuel automation and, in many circumstances, allow technology to replace people.

Of course, it didn’t work out that way. The US learned in Afghanistan and Iraq that there are a lot of holes in both its collection and coordination systems. Drones have their place, but they can’t replace ground troops. The advances from the RMA brought with them some enormous advantages, especially against militaries that didn’t have access to the same technologies, but never resulted in certainty. Uncertainty still rules the battlefield, and soldiers on the ground are still the only effective way to control a region of territory.

But along the way, we learned a lot about how the feeling of certainty affects military thinking. Last month, I attended a lecture on the topic by H.R. McMaster. This was before he became President Trump’s national security advisor-designate. Then, he was the director of the Army Capabilities Integration Center. His lecture touched on many topics, but at one point he talked about the failure of the RMA. He confirmed that military strategists mistakenly believed that data would give them certainty. But he took the point further, outlining the ways this mistaken belief in certainty shaped how military strategists thought about modern conflict.

McMaster’s observations are directly relevant to Internet security incident response. We too have been led to believe that data will give us certainty, and we are making the same mistakes that the military did in the 1990s. In a world of uncertainty, there’s a premium on understanding, because commanders need to figure out what’s going on. In a world of certainty, knowing what’s going on becomes a simple matter of data collection.

I see this same fallacy in Internet security. Many companies exhibiting at the RSA Conference promised to collect and display more data, and promised that the data would reveal everything. This simply isn’t true. Data does not equal information, and information does not equal understanding. We need data, but we also must prioritize understanding the data we have over collecting ever more data. Much like the problems with bulk surveillance, the “collect it all” approach provides minimal value over collecting the specific data that’s useful.

In a world of uncertainty, the focus is on execution. In a world of certainty, the focus is on planning. I see this manifesting in Internet security as well. My own Resilient Systems (now part of IBM Security) allows incident response teams to manage security incidents and intrusions. While the tool is useful for planning and testing, its real focus is always on execution.

Uncertainty demands initiative, while certainty demands synchronization. Here, again, we are heading too far down the wrong path. The purpose of all incident response tools should be to make the human responders more effective. They need both the initiative to act and the capability to exercise it effectively.

When things are uncertain, you want your systems to be decentralized. When things are certain, centralization is more important. Good incident response teams know that decentralization goes hand in hand with initiative. And finally, a world of uncertainty prioritizes command, while a world of certainty prioritizes control. Again, effective incident response teams know this, and effective managers aren’t scared to release and delegate control.

Like the US military, we in the incident response field have shifted too much into the world of certainty. We have prioritized data collection, preplanning, synchronization, centralization and control. You can see it in the way people talk about the future of Internet security, and you can see it in the products and services offered on the show floor of the RSA Conference.

Automation, like the preplanning it encodes, is fixed. Incident response needs to be dynamic and agile, because you are never certain and there is an adaptive, malicious adversary on the other end. You need a response system that has human controls and can modify itself on the fly. Automation just doesn’t allow a system to do that to the extent that’s needed in today’s environment. Just as the military shifted from trying to replace the soldier to making the best soldier possible, we need to do the same.

For some time, I have been talking about incident response in terms of OODA loops. This is a way of thinking about real-time adversarial relationships, originally developed for airplane dogfights, but much more broadly applicable. OODA stands for observe-orient-decide-act, and it’s what people responding to a cybersecurity incident do constantly, over and over again. We need tools that augment each of those four steps. These tools need to operate in a world of uncertainty, where there is never enough data to know everything that is going on. We need to prioritize understanding, execution, initiative, decentralization and command.
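To make the OODA cycle concrete, here is a minimal Python sketch of one responder’s loop. Everything in it is an invented placeholder rather than any product’s API; the point is the shape of the cycle, with the human (here simulated by a trivial rule) sitting at the decide step.

    import random

    def observe(environment):
        """Observe: sample whatever telemetry is currently available."""
        return {"alerts": random.randint(0, 5)}

    def orient(observations, history):
        """Orient: turn raw data into some understanding (a trivial trend here)."""
        recent = [h["alerts"] for h in history[-3:]] + [observations["alerts"]]
        return {"rising": recent == sorted(recent), "alerts": observations["alerts"]}

    def decide(context):
        """Decide: in real orchestration a human chooses; a rule stands in."""
        return "escalate" if context["rising"] and context["alerts"] > 2 else "monitor"

    def act(action):
        """Act: execute the chosen response; the outcome feeds the next cycle."""
        print(f"action taken: {action}")

    history = []
    for _ in range(5):  # each pass is one observe-orient-decide-act cycle
        obs = observe(None)
        act(decide(orient(obs, history)))
        history.append(obs)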

At the same time, we’re going to have to make all of this scale. If anything, the most seductive promise of a world of certainty and automation is that it allows defense to scale. The problem is that we’re not there yet. We can automate and scale parts of IT security, such as antivirus, automatic patching and firewall management, but we can’t yet scale incident response. We still need people. And we need to understand what can be automated and what can’t be.

The word I prefer is orchestration. Security orchestration represents the union of people, process and technology. It’s computer automation where it works, and human coordination where that’s necessary. It’s networked systems giving people understanding and capabilities for execution. It’s making those on the front lines of incident response the most effective they can be, instead of trying to replace them. It’s the best approach we have for cyberdefense.
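One way to picture that union of people, process and technology: a playbook in which each step declares whether it is safe to automate or needs a human decision. A minimal sketch, with invented step names rather than any vendor’s schema:

    PLAYBOOK = [
        {"step": "snapshot affected host",          "automated": True},
        {"step": "pull recent authentication logs", "automated": True},
        {"step": "isolate host from network",       "automated": False},  # judgment call
        {"step": "notify legal and compliance",     "automated": False},
    ]

    def run_playbook(playbook, approve):
        """Run automated steps unattended; route the rest through a human."""
        for item in playbook:
            if item["automated"]:
                print(f"[auto]  {item['step']}")
            elif approve(item["step"]):
                print(f"[human] {item['step']} (approved)")
            else:
                print(f"[human] {item['step']} (held for review)")

    # A real deployment would prompt a responder; a stand-in decision here.
    run_playbook(PLAYBOOK, approve=lambda step: "isolate" in step)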

Automation has its place. If you think about the product categories where it has worked, they’re all areas where we have pretty strong certainty. Automation works in antivirus, firewalls, patch management and authentication systems. None of them is perfect, but all those systems are right almost all the time, and we’ve developed ancillary systems to deal with it when they’re wrong.

Automation fails in incident response because there’s too much uncertainty. Actions can be automated once the people understand what’s going on, but people are still required. For example, IBM’s Watson for Cyber Security provides insights for incident response teams based on its ability to ingest and find patterns in an enormous amount of freeform data. It does not attempt the level of understanding necessary to take people out of the equation.

From within an orchestration model, automation can be incredibly powerful. But it’s the human-centric orchestration model (the dashboards, the reports, the collaboration) that makes automation work. Otherwise, you’re blindly trusting the machine. And when an uncertain process is automated, the results can be dangerous.

Technology continues to advance, and this is all a changing target. Eventually, computers will become intelligent enough to replace people at real-time incident response. My guess, though, is that computers are not going to get there by collecting enough data to be certain. More likely, they’ll develop the ability to exhibit understanding and operate in a world of uncertainty. That’s a much harder goal.

Yes, today, this is all science fiction. But it’s not stupid science fiction, and it might become reality during the lifetimes of our children. Until then, we need people in the loop. Orchestration is a way to achieve that.

This essay previously appeared on the Security Intelligence blog.

Posted on March 29, 2017 at 6:16 AM • 59 Comments

Comments

Ergo Sum March 29, 2017 7:57 AM

I don’t disagree, but…

In my experience the SIEM solutions are pretty good about the known threats, while the unknown ones can go undetected. This is probably due to how the app is coded…

The other aspect is that qualified security analysts to manage the SIEM are few and far between. Most companies do buy a SIEM solution for a lot of money, but they fail to fund the necessary resources for managing it. But hey, having the SIEM in place awes the auditors and gets them off your back most of the time… 🙂

vas pup March 29, 2017 8:53 AM

Question:
Where is the spot for AI in the suggested orchestration, which could be faster than a human in situations of uncertainty?

Wes Reynolds March 29, 2017 9:33 AM

Bruce, this is excellent. Thank you for so articulately pointing out the need for decentralization and initiative (command) in IR instead of centralized control.

Thunderbird March 29, 2017 10:21 AM


You can only automate what you’re certain about, and […] when an uncertain process is automated, the results can be dangerous.

If I were younger, I’d have this tattooed on a prominent body part–possibly onto several parts.

Note that this is a movie blurb quote: the central brackets contain most of the essay, which by the way was great. It’s just that this captures something that I often feel, but hadn’t really been able to express. I think it is frequently overlooked.

FMJohnson March 29, 2017 10:36 AM

I really enjoyed this post but had a question. You linked the reference to OODA Loop and “airplane dogfights” to an article about the mathematician Abraham Wald and his counter-intuitive conclusions about armoring aircraft during WWII.

It’s a great story, and while the OODA concept might have been part of Wald’s thought process, that story pre-dates the formulation of the OODA Loop concept itself.

Would an article about USAF Col. John Boyd, the fighter pilot who developed the OODA Loop based in part on his observations of air combat during the Korean War, be a better link for that passage?

Rob March 29, 2017 12:58 PM

I think what you are trying to convey here is a current state of awareness that is representative of many complex systems of our time; markets, medicine, security, weather, behavior, conflict, etc. The observer effect and its influence on algorithm development is the key to the next level of complexity. We can develop an algorithm for the defense of network A or human body A but that algorithm will be specialized to A. Further, the observer influences the algorithm and thus cannot be left out of the equation as the observer is the producer and digester.

The story is that we are increasing our dependence in proportion to our co-creation of complexity. Sufficient complexity now requires algorithms to perceive, “observe” reality as “certain”. Thus, as you allude to, the story is not when AI will replace the human, it is our awareness of the proportional increase in our dependence due to expanding complexity. There will be a limit to the trend of complexity shared in consciousness and physics after which point complexity will subside.

Slime Mold with Mustard March 29, 2017 1:07 PM

Bruce at his best (by which I mean we’re thinking alike).

My tattoo:

“When things are uncertain, you want your systems to be decentralized. When things are certain, centralization is more important” (I may need to gain weight).

Applies to everything from resistance movements to evolution.

WhiskersInMenlo March 29, 2017 1:24 PM

Interesting: “Their promise was to replace people with computers, sometimes with the addition of machine learning or other artificial intelligence techniques, and to respond to attacks at computer speeds.”

So how are the computers and networks involved in this protected?
Machine learning is not something a small network or small company can support.
Major attacks are being vectored from silly little devices in the IoT class, and this does not fix that.

Machines will be part of the solution but they are not “the solution”.

My Info March 29, 2017 2:36 PM

Re: OP

Eventually, computers will become intelligent enough to replace people at real-time incident response.

Re: @WhiskersInMenlo

Machines will be part of the solution but they are not “the solution”.

Actually they did that in the 1930s in Germany. IBM’s Hollerith punched card tabulating machines for the Jew census. Now we’re having another “census” like that in the U.S. come 2020.

In other news, Britain is finally serving the German occupation a formal eviction notice.

Clive Robinson March 29, 2017 2:53 PM

@ Bruce,

    He confirmed that military strategists mistakenly believed that data would give them certainty.

Information theory in effect says they were wrong.

There are two types of system you can observe, a “bounded system” and an “unbounded system”.

As you observe either system and obtain more bits of information about it, the observed entropy, and thus the uncertainty, starts to rise. Only with a bounded system do you get a tipping point where, as you increase the bits of information, the entropy or uncertainty eventually starts to drop.

To see why, go back to the old “urn of balls” thought model as a starting point. All you know is that the balls are red or black and there is an unknown quantity of both in the urn. As you take balls out you record their colour, then either discard them or put them back in the urn. You can quickly see that when you discard the balls, the knowledge about the ratio of black to red balls becomes less uncertain as the urn empties. Now imagine that the urn has some machinery attached so that every time you take a ball out and discard it, the machine selects a red or black ball from hoppers you cannot see and drops it in the urn again, unseen by you. Ask yourself what you can actually determine, and with how much certainty?
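A toy Python simulation of the two cases makes the asymmetry visible; the 30/70 mix and the machine’s hidden bias are arbitrary assumptions for illustration:

    import random

    def bounded_urn(reds=30, blacks=70):
        """Draw and discard until empty: the ratio is eventually known exactly."""
        urn = ["red"] * reds + ["black"] * blacks
        random.shuffle(urn)
        drawn = [urn.pop() for _ in range(len(urn))]
        return drawn.count("red") / len(drawn)

    def unbounded_urn(draws, hidden_bias=0.3):
        """Draw, discard, and let unseen machinery drop in a replacement.
        You only ever estimate the machine's current behaviour, and that
        behaviour could change tomorrow -- certainty never arrives."""
        return sum(random.random() < hidden_bias for _ in range(draws)) / draws

    print("bounded urn, exact ratio:    ", bounded_urn())
    print("unbounded urn, estimate only:", round(unbounded_urn(1000), 3))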

My Info March 29, 2017 3:17 PM

@Clive Robinson

Consider Pólya’s Urn.

Initially the urn is filled with an equal number of white and black balls.

At each step, a ball is drawn at random, replaced, and an additional ball of the same color is placed into the urn.

Eventually, with probability one, the urn comes to contain a majority of balls of one color, and that color remains constant for eternity.

Is it almost certain, then, that if the process continues, at some point in time, no ball of the minority color will ever be drawn again?
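A quick Python check of that behaviour (the step count and number of runs are arbitrary):

    import random

    def polya_urn(steps, white=1, black=1):
        """Draw at random, replace, and add another ball of the same colour.
        The white fraction converges, but to a different random limit each
        run -- the early draws get locked in."""
        for _ in range(steps):
            if random.random() < white / (white + black):
                white += 1
            else:
                black += 1
        return white / (white + black)

    # Five independent runs give five different limiting fractions:
    print([round(polya_urn(10_000), 3) for _ in range(5)])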

Clive Robinson March 29, 2017 3:46 PM

@ My Info,

Consider Pólya’s Urn.

That is a “degenerate”, non-independent form of the experiment. It’s easy to see that the probability shifts based on previous drawings.

That is, if you have just one black ball and one white ball, after the first drawing there will be three balls, two of which match the colour of the first drawing, so you have 2:1 odds of the same colour being drawn on the second drawing. After the second draw you have either a 3:1 split in favour of the first colour (with probability 2/3) or a 2:2 split (with probability 1/3), so the leading colour is twice as likely to extend its lead. It becomes obvious that this behaviour tends to predominate even with a large number of balls in the urn initially.

It’s why you have to use an “apparently random” replacement, not a “replace with two of a kind”.

John Hudson March 29, 2017 4:03 PM

I was taught that information=data+meaning. There is a limit to the ‘meanings’ which an automated system can apply to data to generate useful information. So there is a limit to the range of information which an automated system can generate from any data set.

JPA March 29, 2017 4:20 PM

I’ve seen this delusion of certainty afflict my field of medicine, with similarly destructive consequences.

@Clive
Appreciate your points as usual. One could also consider that in a system with highly non-linear processes, small errors in the data obtained will lead to large and unpredictable errors in the predicted behavior of the system. Also, most real-world systems are non-stationary: the underlying processes are changing over time, but most analysis tools assume stationarity.
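A tiny Python illustration of the first point, using the logistic map as a stand-in for a highly non-linear system (the parameter value and the size of the measurement error are arbitrary):

    def logistic(x, r=3.9):
        """One step of a chaotic toy system."""
        return r * x * (1 - x)

    true_state, measured = 0.3, 0.3 + 1e-9  # a billionth of measurement error
    for step in range(1, 61):
        true_state, measured = logistic(true_state), logistic(measured)
        if abs(true_state - measured) > 0.1:
            print(f"prediction diverges badly after {step} steps")
            break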

Lawrence D’Oliveiro March 29, 2017 4:44 PM

→FMJohnson

You are right. I, too, was disappointed by his choice of link. Not to mention the article linked to was really stretching the analogy, by ignoring the point about which parts were hit and which were not.

Clive Robinson March 29, 2017 4:48 PM

@ John Hudson,

I was taught that information=data+meaning.

It depends on who is doing the defining. Some would say that is “knowledge” not “information”, and others that “data” is information that has been fixed in value in some way.

Overly briefly, my own point of view is that information exists intangibly in the tangible universe; however, to store, communicate or process it you have to impress it onto a tangible form in some way[1], and thus have made it “data”, to which “meta-data” gives a measure of meaning, and “knowledge” is in effect meta-meta-data.

[1] A consequence of this is that tangible data is a subset of intangible information, which obviously means that knowledge is a subset of data; thus there are distinct limits on knowledge but not, of necessity, on information.

My Info March 29, 2017 5:16 PM

Re: census

This Is Why The Erasing of LGBT Americans On The 2020 Census Matters

http://www.thedailybeast.com/articles/2017/03/29/this-is-why-the-erasing-of-lgbt-americans-on-the-2020-census-matters.html

When we talk about computer intelligence replacing human intelligence, we are really talking about such a mass slaughter of said humans as occurred in Germany in the 1930s. This is nothing new.

This is nothing but a manifestation of the Nazis’ interpretation of Nietzsche’s Übermensch.

There is nothing new under the sun.

Jimbo March 29, 2017 5:21 PM

And what happens when the malware infects the computer running the AI and security incident response automation? The malware will allow itself to be whitelisted, just as rootkits override the OS.

Clive Robinson March 29, 2017 5:27 PM

@ JPA,

Also most real-world systems are non-stationary, so the underlying processes are changing over time but most analysis tools assume stationarity.

Thermodynamic entropy (probably the most fundamental physical law we have) indeed says that underlying processes must change, and en masse[1] in a given direction.

However, connecting up the equivalent of what happens in a hot cup of tea to large-scale processes is, in many people’s minds, a tad difficult. The thing is, there is an interesting effect that you do see in the real world that does show this:

Imagine you have to run through a crowd of people walking in random directions… It can be shown that as long as your effective velocity is twice that of the fastest walking person, you can in effect assume that they are stationary for the length of time it takes you to pass them[2]. In effect you do the opposite of what a film does, in that you take a real continuous movement and view it as a series of rapid snapshots. For that brief period you decide your next directional move. Thus you are in effect “sampling” and making a simple comparison, rather than trying to calculate the other people’s velocity and direction.

The trouble you’ve observed starts happening when your velocity is not significantly higher than, or is even less than, the other people’s… Not understanding this is what gets people tripping up, both literally and figuratively.

[1] But as others have noted in the past, it is in effect a statistical process, which thus allows for localised anomalies.

[2] There are a few assumptions, such as the size and thinking distance being negligible.

Bob Dy;an's Itchy Palm March 29, 2017 6:06 PM

@Bruce

“Uncertainty demands initiative, while certainty demands synchronization.”

How does this square with your oft-repeated complaint about the logic of “something must be done, this is some thing, so let’s do it”? Viewed in that light, uncertainty doesn’t demand initiative; uncertainty becomes a rationalization for initiative. We don’t know what the hell we are doing, we don’t know what the hell the consequences will be, but action is better than inaction so HOO RAH.

Z.Lozinski March 29, 2017 6:22 PM

Going back to some of Bruce’s thoughts about RMA and Boyd’s OODA loop for a moment.

I found watching German Army mechanised infantry training films from WW2 instructive. Junior NCOs were trained to put a lot of effort into planning for an operation. It was clear that the trainers knew that any plan would not survive contact with the enemy, but that the very process of thinking through the options, and planning the responses, and implementing them meant that the NCOs understood they were expected to take responsibility and innovate against the plan when circumstances changed. Contrast the centralised doctrine of the Red Army.

Now, back to modern cybersecurity. Yes there are areas where there is experience of threats which probably can be automated: DDoS? But when a new attack or new threat shows up, we need the people in the frontline who are skilled and willing to take action.

One of the implications is that we need better operational training around dealing with incident response. I don’t mean the multiple-guess nonsense, but properly constructed exercises. Again, the military have been doing this for around 250 years, with kriegspiel, staff college exercises, war games, and National Command exercises. Can AI systems like Watson help here, by generating a realistic threat environment for training? [Disclosure: I also work for IBM. Other AI/ML systems are available.]

Dirk Praet March 29, 2017 6:37 PM

@ ab praeceptis

This Is Why The Erasing of LGBT Americans On The 2020 Census Matters

I suggest if ever you pass by here, you give me a nudge so we can retaliate by imposing a discussion about OODA loops and GOST algorithms on patrons of the local LGBTQ bar 😎

ab praeceptis March 29, 2017 7:02 PM

Dirk Praet

You evil man, hahaha.

But seriously, I wouldn’t like that. I’m not at all interested in angering lgbt, transsexuals or whatever (and we might even fail as, so I presume, quite some of them might be quite interested in IT security).

I’m not even against the occasional shrieking of them; seems to be part of it. What I do not accept, however, is merciless, ignorant, and obtrusive shrieking again and again and again, hand in hand with utter disrespect for our discussions.

I’m probably like Putin in that. I respect their sexuality and kinks and I do certainly not want them persecuted or disadvantaged – but – neither do I want them to make too much noise or going public about it.

And I find it funny, if in a sad way, how they don’t understand that they themselves create the very basis for being hunted. Unnerving people again and again with something they don’t care a rat’s ass about makes them angry, and sooner or later they will come up with nasty ideas …

But then, it seems to be a general illness of modern “democracy” (à l’américaine) to dissolve all boundaries and to let “group XYZ shall not be persecuted” turn into “we must make efforts to ensure the rights of group XYZ” and then finally into “it is of ultimate importance that group XYZ feels that each and every no matter how weird demand of group XYZ is met”.

And now I stop as I respect our hosts request to stay away from politics unless it’s in a concrete relation to security.

Mark March 29, 2017 7:33 PM

Bruce, how on earth did ab praeceptis’s bile get through your new censorship regime?

Likewise, your American obsession with war isn’t helpful. You’ve been tricked into allowing yourself to normalise war, to attempt to learn something from what is an utter cancer upon American-led society.

John Galt March 29, 2017 10:17 PM

@ Schneier

[[[ Of course, it didn’t work out that way. The US learned in Afghanistan and Iraq that there are a lot of holes in both its collection and coordination systems. Drones have their place, but they can’t replace ground troops. The advances from the RMA brought with them some enormous advantages, especially against militaries that didn’t have access to the same technologies, but never resulted in certainty. Uncertainty still rules the battlefield, and soldiers on the ground are still the only effective way to control a region of territory. ]]]

CORRECTION: THE US LEARNED IN AFGHANISTAN THAT AFGHANISTAN DIDN’T HAVE ANY CELLPHONE TOWERS.

Especially during 2010-2012, “insurgents” (as if the US had a right to call them “insurgents” as opposed to “enemy”) were destroying new cellphone towers because they figured out that the cellphones were used to “automate drone strikes”…

Sad, but true.

The entire cyberwarfare infrastructure is pointed at YOU… not the Taliban. And not foreign terrorists. Period.

John Galt March 29, 2017 11:16 PM

[[[ When we talk about computer intelligence replacing human intelligence, we are really talking about such a mass slaughter of said humans as occurred in Germany in the 1930s. This is nothing new. ]]]

Facebook + Google + NSA == Dream Come True — for Hitler, Mao, Stalin, Pol Pot, Bonaparte, Genghis Khan and even King George and the British East India Company (and an entire host of other Tyrants throughout history).

Today, Anne Frank wouldn’t survive 24 hours.

The Forbin Project.

RonK March 30, 2017 12:28 AM

“when an uncertain process is automated, the results can be dangerous”

Anyone conversing with their SO while multitasking knows this is so, so, true.

(@Thunderbird : thanks for drawing my attention to that quote)

tyr March 30, 2017 12:56 AM

One of the recurrent problems in any field is the ‘one size fits all’ problem. Once a so-called solution exists to any problem, whatever was done in the past gets used as the best fix for the next problem.

OODA loops are great if you are a fighter pilot; deciding that they are ‘the’ answer to all military problems opens you to a massive string of failures without any chance of a success. You can see this with the idea of building lots of aircraft carriers because they were so useful in WW2. They don’t work against a space-based adversary.

Think about how the current surveillance works. Every time there is an incident they find that the person has been on the radar of the system, sometimes for years. All of that collection is wasted if it can’t look ahead. If we could build the machine to predict the future you’d be living in a science fiction world right now.

When doing predictive work on complex related problems you have to pattern them correctly, or you have a false picture built into what is an aid to understanding.

One example of this is the drone weapon of choice for anti-personnel usage. Salesmen had a large inventory built up for the cold war turning hot. Hellfires will kill people, but they were developed to kill Main Battle Tanks. Some fool decided that the incidental collateral damage to the entire area was acceptable. The result is endless wars that cannot be won by military methods. Very clever; defense contract salesmen are not about to interfere with such a wonderful plan because it might impact the bottom line. In the meantime humans have moved on to 4GW and you see signs of 5GW thinking among the so-called enemy.

The conflation of data with information and intelligence would be quite humorous if the subject weren’t so serious.

Wael March 30, 2017 1:49 AM

@Bruce,

When things are uncertain, you want your systems to be decentralized. When things are certain, centralization is more important.

And…

In a world of uncertainty, the focus is on execution. In a world of certainty, the focus is on planning.

Are these universal truths? No exceptions? I’ll have to give this more thought. As for replacing humans, it’s often a marketing spiel.

Clive Robinson March 30, 2017 2:31 AM

@ Wael,

As for replacing humans, it’s often a marketing spiel.

If something is prescriptive then the human can be either augmented or replaced. We have known this since the days of using animals to pull ploughs, and wind and water mills to grind grain and pump water.

Where humans still have advantages is pattern matching and intuition. I could design an add-on for an IDS that will spot changes in “usage signature” much faster than a human, which would be the prescriptive part. Where the human comes in is recognising the difference between noise and signals in the human domain and working it forward. Bruce has called it “thinking hinky” for many years; other people put it down to “gut feelings” etc.
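A minimal Python sketch of that prescriptive part, flagging deviations from a rolling baseline (the window size and threshold are invented; deciding whether a flagged reading is noise or signal stays with the human):

    from collections import deque

    class UsageSignatureMonitor:
        """Flag readings that deviate from a rolling baseline."""

        def __init__(self, window=100, tolerance=3.0):
            self.window = deque(maxlen=window)
            self.tolerance = tolerance

        def update(self, value):
            flagged = False
            if len(self.window) >= 10:
                mean = sum(self.window) / len(self.window)
                var = sum((x - mean) ** 2 for x in self.window) / len(self.window)
                std = var ** 0.5 or 1.0  # guard against zero variance
                flagged = abs(value - mean) > self.tolerance * std
            self.window.append(value)
            return flagged

    mon = UsageSignatureMonitor()
    for v in [10, 11, 9, 10, 12, 10, 11, 10, 9, 10, 11, 10, 48]:
        if mon.update(v):
            print(f"anomalous usage reading: {v} -- hand off to a human")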

As far as I’m aware we have not yet been able to teach machines to “learn”, and thus teach themselves; when we do, we had better hope the machines will like to “keep pets”.

Wael March 30, 2017 4:04 AM

@Clive Robinson,

Agreed! Machines don’t have gut feelings and they can’t think. They can calculate and detect patterns. Captchas still confuse them.

As for replacing humans… sometimes the replaced humans are tasked with maintaining the solution instead.

Katherine Lam March 30, 2017 4:59 AM

Has anyone hooked an automation tool like Jenkins up to a SIEM to automate some of the tasks in responding to an attack? What are the most common responses?

I’m really curious.

Dirk Praet March 30, 2017 5:35 AM

@ Clive, @ Wael

Where the human comes in is recognising the difference between noise and signals in the human domain and working it forward.

Exactly. I have implemented tons of logging, auditing, monitoring, IDS and other systems that all failed miserably because no one was following up on them, no one was able to interpret the data they generated, or they were simply ignored or even shut down because of paralyzing numbers of false positives. What you also often see is that even when a problem is diagnosed correctly, there are no written policies or procedures for incident response, leaving it up to the poor IT guy to take actions he has no authority for, and he generally ends up taking all the heat.

There are way too many companies setting up SIEMs just to be compliant with whatever standard or regulation, led to believe by their respective vendors that they are some sort of stand-alone magic bullet that will somehow manage itself, irrespective of whether it’s a locally implemented or a remote monitoring solution. They aren’t, and no degree of automation can change that. Implementing any kind of SIEM is not just a technical solution, but a company-wide effort that will always require a human element on several different levels.

Another element in the equation which I believe is just as important, is the mistaken belief by management that implementing some kind of haphazard automated solution somehow covers their own *sses when something goes horribly wrong, thinking they can just blame the product vendor, solution manager, IT guy or unnamed hax0r instead. I’ve seen it plenty of times, and that’s just not the way things work, whatever the company lawyer claims to the contrary.

Alex March 30, 2017 5:37 AM

Talking to the Google CRE team at NEXT’17, I had much the same impression, although I’d put the distinction as “human command and automated execution”, by analogy with “centralised command and decentralised execution”. A big, big priority of theirs is “always have a button” – i.e. a stockpile of responses you can launch when you need to, in the knowledge it won’t fail due to a typo or such.
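In code, the “button” idea amounts to a registry of pre-written, pre-tested actions that can only be launched by name, so nothing is improvised under pressure. A sketch, with invented action names:

    BUTTONS = {
        "block-egress": lambda: print("applying pre-tested egress block"),
        "rotate-creds": lambda: print("rotating credentials per sealed runbook"),
        "failover-dns": lambda: print("switching DNS to standby, as rehearsed"),
    }

    def press(name):
        """Launch a pre-validated response; refuse to guess at unknown ones."""
        action = BUTTONS.get(name)
        if action is None:
            raise KeyError(f"no pre-validated response named {name!r}")
        action()

    press("block-egress")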

Clive Robinson March 30, 2017 6:20 AM

@ Dirk Praet, Wael and the usual suspects,

You might find this article from a Prof of Anthropology interesting, on why the likes of the paper-pushing class are so numerous, and potentially useless:

http://evonomics.com/why-capitalism-creates-pointless-jobs-david-graeber/

Interestingly, the Prof does not go into the similarities between them and “guard labour”, or the way government-inspired legislation creates vastly increasing amounts of “red tape” that induces a relentless need for more paperwork, almost always of no use to anybody.

Further, the Prof missed another point: 1973-4 was the time of greatest office productivity. Since then, as ICT has risen, productivity has dropped, not just in the administrative class but in the skilled class, and with it actual skill levels have fallen. Though the Prof does indirectly reference it by way of meetings etc.

One point to note is that as the likes of MS Office have increased people’s ability to “beautify” their work, they spend increasing amounts of time being “frustrated creatives”. Worse, some studies have shown that senior administrative staff rate the value of any given information source not by the content but by how it conforms to some imposed format; you actually hear the “looks impressive” response to a book-sized report into the mundane that few can ever bear to spend the time reading. Thus administrative types have almost become thwarted if not frustrated authors… It might explain some of the venal behaviour, patronage, cronyism and nepotism we frequently see, and why it rarely affects the actual “real work” carried out by an organisation.

Z.Lozinski March 30, 2017 7:32 AM

@Katherine Lam,

There are people today who take the outputs of the SIEM tools and put them into a data lake. This is only a third of the solution. Then you have to implement some big data toolset to process the information. But before you can write the analytics, you have to know what you are looking for. Remember, organisations that focus on security events have teams dedicated to looking at what is happening in the wild and creating analytics to detect this behaviour. You have to replicate that expertise in-house. Finally you have to do something with the insight, and yes, you probably want the response to be automated.

Any scripting technology will do for remediation. (As an aside, I’m not sure Jenkins is the right tool for this automation. Jenkins is aimed at CI/CD – building and deploying code. Right now, it is more likely that you are going to be reconfiguring networks and firewall rules than building and deploying code. Once Infrastructure-as-Code becomes commonplace that will change. Today, there are fewer than 10 organizations worldwide deploying Infra-as-Code at scale.)

The real challenge is not the technology, but people. Where do you find people with the skills you need to create this type of SIEM solution? Your management needs to be committed to building up (and retaining) teams with skills in vulnerability research, analytics development and development of automated remediation.
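A sketch of what such a scripted remediation might look like; the alert fields, severity scale and firewall command are all invented, and the command is only printed so the example stays harmless:

    def remediate(alert, approve):
        """Turn a parsed SIEM alert into a human-gated firewall change."""
        ip = alert.get("src_ip")
        if not ip or alert.get("severity", 0) < 7:
            return "no action"
        if not approve(f"block {ip}?"):
            return "deferred to analyst"
        # Illustrative only -- substitute the site's real firewall tooling.
        print(f"would run: iptables -A INPUT -s {ip} -j DROP")
        return f"blocked {ip}"

    alert = {"src_ip": "203.0.113.9", "severity": 9}
    print(remediate(alert, approve=lambda question: True))  # auto-approved demo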

Slime Mold with Mustard March 30, 2017 8:23 AM

@ Clive

The linked article made some decent points, but the author really ought to have found out what a private equity CEO does before singling them out.

Also, school administrators are the frequent target of right-wing critics – mostly because in large US cities their number frequently approaches that of the teachers employed by the district, often with vague or ridiculous job descriptions. What really pisses off the right are the pension plans of both teachers and administrators. Most US cities are careening toward bankruptcy and this is the single largest (but not only) cause. Look up Wisconsin’s Act 10 if you want a glimpse (use both google and duck duck go – the difference in results is startling).

Test scores continue to fall. There are exceptions, invariably followed by the cheating scandal.

You made an excellent point about people producing book-length reports that ought to be 30 pages. Since we produce audit reports that may land in court, they need to be thorough. We have a summary listing methodology and findings, chapters listing specific problems, and a list of recommendations: 30-40 pages tops. Then we toss all the data into addenda. Very readable – to accountants and lawyers anyway. Our competitors send their copies to a printer; we just run them off a laser jet. I put a lot into hiring people who can write concisely.

Wael March 30, 2017 9:27 AM

@Dirk Praet, @Clive Robinson,

I have implemented tons of logging, auditing, monitoring, IDS and other systems that all failed miserably because either no one was following up on them, was able to interpret the data they generated or that were simply ignored or even shut down because of paralyzing numbers of false positives.

I see that as well. Very common. Automation in this area helps.

Wael March 30, 2017 10:04 AM

@Clive Robinson, @Dirk Praet,

ability of people to “beautify” their work, they spend increasing amounts of time being “Frustrated Creatives” worse, some studies have shown that senior administrative staff rate the value of any given information source not by the content but by how it conforms to some imposed format,

True! A while back one of my friends changed positions to a PMP. In the internal interview, his Japanese manager said: I don’t like BS. Don’t spend any time making the presentation look nice. I care only about the content. Don’t be a used car salesman! I told my friend that his manager is a smart man.

One weekend (we were both working in the same office) I finished my work early on Saturday and went to my friend’s cube to tell him: let’s go downtown (SF) for a walk. I noticed that he had spent close to 4 hours on a single slide! I told him it looked good enough, let’s go! Don’t you remember what your manager said? He said: but, but… Told him no buts; get off your butt and let’s go 🙂 I had to drag him out of the cube!

Bob Dylan's Itchy Palm March 30, 2017 10:34 AM

Likewise, your American obsession with war isn’t helpful. You’ve been ticked into allowing yourself to normalise war, to attempt to learn something from what is an utter cancer upon American-led society

@Mark.

We are thinking along the same lines, but I would describe the problem as the normalization of aggression, of which war is one expression. I wonder how much Bruce’s lifelong love of D&D, and the importance an “initiative roll” plays in that game, have influenced Bruce’s thinking.

Gary Stoneburner March 30, 2017 11:03 AM

You said “Data does not equal information, and information does not equal understanding”. Excellent point. Outside of cyber, intel has the concept of signal > data > information > intelligence. I suspect that there has been way too little thinking on how this applies to cyber defense, and on what in defense of the cyber battlespace equates to each of these four levels of “information”. I have often seen what looks like data (e.g., an IP address) being provided where intelligence is needed. And improvements in information flow between defender and intel appear to be seeking information that can only result in better whack-a-mole, not in the ‘understanding’ needed for effective cyber defense operations. (Oh, and by the way, while the above is in ‘military’ speak, non-military organizations, from civil federal agencies to businesses selling coffee and donuts, are in a ‘battle’ with malicious adversaries seeking to cause or facilitate harm to them through the cyber “battlespace”.)

Bignose Dukakis in the Tank March 30, 2017 11:16 AM

An illustrative example of contingency under this restrictive tunnel-vision concept of incident response.

The US police state regards your universal human right to seek and obtain information as a threat. It personalizes the threat as an enemy, Assange, with the cookbook techniques used to demonize insubordinate heads of state. Unable to blow the threat up with a drone, the US attempts to physically contain and isolate the threat in its British satellite. US incident response, if not automated, is clearly programmed with standard militarist rituals that undermine US legitimacy, influence and sovereignty.

Now WGAD has rejected a final inadmissible US appeal of its decision 2015/54 regarding arbitrary detention of Julian Assange – arbitrary being a term of art meaning beyond illegal to the point of denying the idea of rule of law.

http://www.ohchr.org/EN/NewsEvents/Pages/DisplayNews.aspx?NewsID=20961&LangID=E

The US CIA regime will make their blackmailed British pedos keep it up, of course, shitting on jus cogens in order to attack your right to seek and obtain information, as what’s left of US international standing dribbles away down Uncle Sam’s leg. US national dishonour and disgrace accelerates the increasing multipolarity that is obvious to Lavrov, Haass, and everybody with two working brain cells.

In continued self-destructive response to its lost influence, the US has now detached Britain from the EU to salvage a remnant of their original servile NATO bloc. And the US becomes more and more irrelevant to the outside world.

albert March 30, 2017 11:48 AM

@Mark,

The “American obsession with war” is a result of the corporate obsession with profits, most certainly not the will of the American people as a whole.

We are not the only country immersed in ‘cyber-warfare’. It’s a global problem. ‘Warfare’ may not be the best term, but it’s easy to understand by most people, regardless of where they live.

Having one’s bank account stolen or one’s business wiped out by hackers may not be existential threats, but they are certainly a close second.

BTW, Bruce’s ‘censorship scheme’ deletes comments -after- posting, which is as it should be.

. .. . .. — ….

John Galt March 30, 2017 12:56 PM

@ Bignose Dukakis in the Tank

[[[ The US police state regards your universal human right to seek and obtain information as a threat. ]]]

That’s the norm in slave-based societies. Keep them in debt, stupid, disorganized, and barefoot. And, keep it all disguised. Invent new words to confuse the slaves. Like “waterboarding” (not torture) and “Chinese Water Torture” (torture).

Blindfold them and force them to play pin-the-tail-on-the-donkey.

[[[ Now WGAD has rejected a final inadmissible US appeal of its decision 2015/54 regarding arbitrary detention of Julian Assange – arbitrary being a term of art meaning beyond illegal to the point of denying the idea of rule of law. ]]]

Throw him in the King’s dungeon. That’s all it is. Another example of “keeping it all disguised.”

The more things change, the more they stay the same.

It’s a wonderful world, isn’t it?

John Galt March 30, 2017 1:03 PM

@ Gary Stoneburner

(Oh, and by the way, while the above is in ‘military’ speak, non-military organizations from civil federal agencies to businesses selling coffee and donuts are in a ‘battle’ with malicious adversaries seeking to cause or facilitate harm to them through the cyber “battlespace”.)

It’s the plot from Starship Troopers: fighting the Bugs (aka the Boogie Man) on the planet Klendathu on the other side of the galaxy.

We are on Planet P — wherever that is.

Frank March 30, 2017 5:58 PM

I noticed the words “talk about the future of Internet security”, and it does concern most of us: who in the modern world doesn’t deal with the internet, and who doesn’t deal with the weaknesses of traditional passwords that make the internet insecure: prone to keylogging, wiretapping, peeking and phishing!

Don’t you hate the restrictions on your passwords: lowercase, uppercase, numbers and special characters? There are so many restrictions that I can’t even remember my own passwords; maybe a hacker knows my passwords better than I do, what a joke!

But 21st century technology is finally sophisticated enough to fix the weaknesses of traditional passwords.

Watch this video for the latest breakthrough technology of an observer/interception-proof authentication and encryption system and method; it overcomes all the weaknesses of traditional passwords mentioned above. The video is as informative as entertaining ^_^! [with some movie clips you will laugh]: https://www.youtube.com/watch?v=518p2cIbynY

More info: http://nmjava.com/gate

Justin March 30, 2017 7:35 PM

A Watson security app for Splunk would be awesome. I like the idea of Watson but I do not find value in QRadar; gone are the days of cornerstone vendors.

Nate March 30, 2017 8:29 PM

@Bob Dy;an’s Itchy Palm

“‘Uncertainty demands initiative, while certainty demands synchronization.’

How does this square with your oft-repeated complaint about the logic of ‘something must be done, this is some thing, so let’s do it’?”

I assume that “initiative” here doesn’t mean “action” but “decentralised command”, i.e. trusting your local incident responders to apply their own analysis, local knowledge and common sense to the situation BEFORE acting, rather than immediately acting based on standardised orders or scripts made in Head Office, who maybe don’t understand the situation as well.

Russ White April 1, 2017 7:25 AM

The OODA loop, which I was taught during various training “things” in the USAF, has always seemed like a very useful way to approach security. The key point is that you are trying to provide information and available actions for the person making the decision. It needs to be combined with subsidiarity, though, which moves the decision to the person (not the machine!) closest to the available information. There is a strong tendency in our culture to believe that because data collection can be automated, and its storage and processing centralized, all decision making should be centralized as well. In other words, rather than saying “the information needed to make that decision is over there, so the person over there should make it,” we tend to say, “I don’t have the information I need, and I should be the one making the decision, so I am going to get the information I need to do so.” Whole lotta I’s in that sentence; about now there should be a robot with flailing arms saying “Warning, Will Robinson.”

I did a short series on the OODA loop and security from a network perspective that might add some useful information…

http://rule11.us/getting-inside-the-loop/
http://rule11.us/ooda-2/
http://rule11.us/decide/
http://rule11.us/act/

Jared Hall April 2, 2017 12:33 PM

@WhiskersInMenlo

I liked your post. However, I wish to point out that ML and AI are NOT the same and cannot be summarily dismissed with such a broad stroke. Machine Learning has been around forever. AI depends upon ML; however, until such time as AI’s reasoning can sort fact from noise, it will remain a pipe dream.

“Machine learning is not something a small network or small company can support.”

Conversely, I maintain that ML is ideally suited to small networks rather than large ones. Applying the Air Force’s OODA model, the Decision engine only has to deal with a limited subset of traffic patterns and information targets.

John Hodgson April 5, 2017 11:07 PM

Bruce,

The OODA (loop) concept came originally from LtCol John Boyd, with regard to his role as a flight instructor in the USAF advanced flight school. He had a standing bet that he could “shoot down” any instructor or student within 40 seconds of the dogfight commencing. He never lost.

Observe Orientate Decide Act was his description of “reactive” thought processes, and the strength wasn’t the very basic logic in this cycle but his ability to repeat the cycle faster than most others. The cycle speed was as important as the logical structure.

His contention was that having developed this thought-process ability, he (you) can use the familiarity to “get inside your opponent’s reactive thought cycles and essentially ‘out-think’ them. When they start to react to your decisions, you have the advantage.”

You still need the underlying knowledge, experience and craft to make the right decisions from what you have “observed”, and the intellect to Orientate this information in the world events you are occupied with.

Boyd only died in the late 1990s and before then was one of the Pentagon’s “Young Turks”
(at age 50-65) who together changed the focus of the US military towards more distributed thought processes, operational art, greater mobility and operational pace.

He is worth looking up, and much of his written knowledge is worth absorbing, as I have found much of it directly applicable to what we try to achieve in the cyber and security occupations.

John

TomTrottier April 9, 2017 1:01 AM

Automation is a tool, not an answer. Automated responses to a possible threat should be directed to closing the barn doors before all the horses escape.

Automation should be used to reduce uncertainty, particularly the uncertainty about what assets are at risk, and how much. The costs of such shutdowns have to be weighed against the cost of not shutting down and the cost/time to restore availability. And, of course, automation should notify the right people.
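That weighing is a small expected-cost calculation; a sketch with invented numbers:

    p_real_attack   = 0.15     # chance the alert is a true positive
    loss_if_ignored = 500_000  # damage if a real attack runs unchecked
    shutdown_cost   = 20_000   # lost business while systems are down
    restore_cost    = 5_000    # effort to restore availability

    print(f"expected cost of ignoring:      {p_real_attack * loss_if_ignored:,.0f}")
    print(f"expected cost of shutting down: {shutdown_cost + restore_cost:,.0f}")
    # With these numbers, shutting down is cheaper in expectation; with a
    # lower true-positive rate the answer flips -- context, not a fixed rule.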

sark das June 24, 2018 5:07 AM

AI’s application in the military would enable forces to carry out precision strikes with minimal loss of life. As AI matures further, it would enable maximum damage to the adversary with zero military losses, but a mountain of civilian casualties could be the result.
