The Future of Incident Response

Security is a combination of protection, detection, and response. It’s taken the industry a long time to get to this point, though. The 1990s was the era of protection. Our industry was full of products that would protect your computers and network. By 2000, we realized that detection needed to be formalized as well, and the industry was full of detection products and services.

This decade is one of response. Over the past few years, we’ve started seeing incident response (IR) products and services. Security teams are incorporating them into their arsenal because of three trends in computing. One, we’ve lost control of our computing environment. More of our data is held in the cloud by other companies, and more of our actual networks are outsourced. This makes response more complicated, because we might not have visibility into parts of our critical network infrastructures.

Two, attacks are getting more sophisticated. The rise of the APT (advanced persistent threat)—attacks that target specific organizations for reasons other than simple financial theft—brings with it a new sort of attacker, which requires a new threat model. Also, as hacking becomes a more integral part of geopolitics, unrelated networks are increasingly collateral damage in nation-state fights.

And three, companies continue to under-invest in protection and detection, both of which are imperfect even under the best of circumstances, obliging response to pick up the slack.

Way back in the 1990s, I used to say that “security is a process, not a product.” That was a strategic statement about the fallacy of thinking you could ever be done with security; you need to continually reassess your security posture in the face of an ever-changing threat landscape.

At a tactical level, security is both a product and a process. Really, it’s a combination of people, process, and technology. What changes are the ratios. Protection systems are almost entirely technology, with some assistance from people and process. Detection requires more-or-less equal proportions of people, process, and technology. Response is mostly done by people, with critical assistance from process and technology.

Usability guru Lorrie Faith Cranor once wrote, “Whenever possible, secure system designers should find ways of keeping humans out of the loop.” That’s sage advice, but you can’t automate IR. Everyone’s network is different. All attacks are different. Everyone’s security environments are different. The regulatory environments are different. All organizations are different, and political and economic considerations are often more important than technical considerations. IR needs people, because successful IR requires thinking.

This is new for the security industry, and it means that response products and services will look different. For most of its life, the security industry has been plagued with the problems of a lemons market. That’s a term from economics that refers to a market where buyers can’t tell the difference between good products and bad. In these markets, mediocre products drive good ones out of the market; price is the driver, because there’s no good way to test for quality. It’s been true in anti-virus, it’s been true in firewalls, it’s been true in IDSs, and it’s been true elsewhere. But because IR is people-focused in ways protection and detection are not, it won’t be true here. Better products will do better because buyers will quickly be able to determine that they’re better.

The key to successful IR is found in Cranor’s next sentence: “However, there are some tasks for which feasible, or cost effective, alternatives to humans are not available. In these cases, system designers should engineer their systems to support the humans in the loop, and maximize their chances of performing their security-critical functions successfully.” What we need is technology that aids people, not technology that supplants them.

The best way I’ve found to think about this is OODA loops. OODA stands for “observe, orient, decide, act,” and it’s a way of thinking about real-time adversarial situations developed by US Air Force military strategist John Boyd. He was thinking about fighter jets, but the general idea has been applied to everything from contract negotiations to boxing—and computer and network IR.

Speed is essential. People in these situations are constantly going through OODA loops in their head. And if you can do yours faster than the other guy—if you can “get inside his OODA loop”—then you have an enormous advantage.

We need tools to facilitate all of these steps:

  • Observe, which means knowing what’s happening on our networks in real time. This includes real-time threat detection information from IDSs, log monitoring and analysis data, network and system performance data, standard network management data, and even physical security information—and then knowing which tools to use to synthesize it all and present it in useful formats. Incidents aren’t standardized; they’re all different. The more an IR team can observe what’s happening on the network, the more they can understand the attack. This means that an IR team needs to be able to operate across the entire organization. (A small sketch of this step and the next follows this list.)
  • Orient, which means understanding what it means in context, both in the context of the organization and the context of the greater Internet community. It’s not enough to know about the attack; IR teams need to know what it means. Is there new malware being used by cybercriminals? Is the organization rolling out a new software package or planning layoffs? Has the organization seen attacks from this particular IP address before? Has the network been opened to a new strategic partner? Answering these questions means tying data from the network to information from the news, network intelligence feeds, and other information from the organization. What’s going on in an organization often matters more in IR than the attack’s technical details.
  • Decide, which means figuring out what to do at that moment. This is actually difficult because it involves knowing who has the authority to decide and giving them the information to decide quickly. IR decisions often involve executive input, so it’s important to be able to get those people the information they need quickly and efficiently. All decisions need to be defensible after the fact and documented. Both the regulatory and litigation environments have gotten very complex, and decisions need to be made with defensibility in mind.
  • Act, which means being able to make changes quickly and effectively on our networks. IR teams need access to the organization’s network—all of the organization’s network. Again, incidents differ, and it’s impossible to know in advance what sort of access an IR team will need. But ultimately, they need broad access; security will come from audit rather than access control. And they need to train repeatedly, because nothing improves someone’s ability to act more than practice.
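
To make the first two steps concrete, here is a deliberately tiny sketch of an “observe and orient” helper. It scans a syslog-style authentication log for repeated failed logins, correlates them per source address within a time window, and attaches whatever context is on hand (a threat-intelligence list, a couple of business facts) before handing the finding to a person. The log path, regular expression, thresholds, and context values are assumptions made up for this example, not features of any real product; the point is only that the tooling gathers and frames information while the decide and act steps stay with people.

```python
# A toy "observe and orient" helper: scan a syslog-style auth log for repeated
# failed logins, correlate them per source address inside a time window, and
# attach local context before a person decides what to do.  The log path,
# regex, thresholds, threat-intel set, and business context are illustrative
# assumptions, not features of any particular product.
import re
import time
from collections import defaultdict, deque

LOG_PATH = "/var/log/auth.log"          # assumed log source
FAILED_LOGIN = re.compile(r"Failed password .* from (\d+\.\d+\.\d+\.\d+)")
WINDOW_SECONDS = 300                    # correlation window
THRESHOLD = 10                          # failures before we raise a finding

# Hypothetical "orient" inputs: a threat-intel list and business context.
THREAT_INTEL = {"203.0.113.50", "198.51.100.7"}   # documentation-range IPs
BUSINESS_CONTEXT = {"change_freeze": False, "new_partner_network": "192.0.2.0/24"}


def observe(lines):
    """Yield (timestamp, source_ip) for every failed login seen."""
    for line in lines:
        match = FAILED_LOGIN.search(line)
        if match:
            # Real tooling would parse the log's own timestamp.
            yield time.time(), match.group(1)


def orient(events):
    """Correlate failures per source IP in a sliding window and attach
    whatever context is on hand, so a responder can decide quickly."""
    recent = defaultdict(deque)          # ip -> timestamps of recent failures
    for ts, ip in events:
        window = recent[ip]
        window.append(ts)
        while window and ts - window[0] > WINDOW_SECONDS:
            window.popleft()
        if len(window) >= THRESHOLD:
            yield {
                "source_ip": ip,
                "failures_in_window": len(window),
                "known_bad": ip in THREAT_INTEL,
                "business_context": dict(BUSINESS_CONTEXT),
            }
            window.clear()               # avoid re-alerting on every new line


if __name__ == "__main__":
    with open(LOG_PATH, errors="replace") as log:
        for finding in orient(observe(log)):
            # "Decide" and "act" stay with people; we only present the finding.
            print("ALERT for human review:", finding)
```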

Pulling all of these tools together under a unified framework will make IR work. And making IR work is the ultimate key to making security work. The goal here is to bring people, process, and technology together in a way we haven’t seen before in network security. It’s something we need to do to continue to defend against the threats.

This essay originally appeared in IEEE Security & Privacy.

Posted on November 10, 2014 at 6:51 AM

Comments

Clive Robinson November 10, 2014 9:40 AM

@ Bruce,

Whilst it has taken a decade each for the first two steps, I suspect getting response right is going to take rather more than a decade or two.

The reason is history: we have had independent civilian police forces for a little over two hundred years [1]. Prior to that we had military or government forces [2], or a rabble of vigilantes, stretching back for thousands of years.

As we know, “policing by gunpoint” is not exactly conducive to justice or civil peace, and as such should be avoided wherever possible, which is why we generally only let the military police civilians in times of dire national emergency, or where civil law has ceased to exist due to civil unrest or war.

But worse than the military is response by vigilante; history tells us a great deal about why this is such a terrible way for any society to police itself.

But that is where we are with “cyber-security” response currently, and apart from the empire-grabbing attempts by those we find the least desirable (NSA et al.) to carry out cyber-response, our elected officials appear incapable of progressing past the FUD the empire builders churn out to a rational policy on how a cyber-response organisation should be formulated, regulated, sanctioned and placed within society.

And the sooner we actually get off the politically inspired “cyber-warfare” road to perdition the better; police activities are, contrary to TV shows, investigatory, not guns-drawn, body-armoured, door-smashing paramilitary or military engagements.

But too many politicos and war hawks have the light of battle in their eyes, and those further down appear to want Walter Mitty-type mental images to become reality, and thus crave the guns, grenades and military trappings that the industrial part of the MIC wants to sell, usually at vastly over-inflated prices.

[1] Glasgow City Police were formed in 1800, the Royal Irish Police in 1822 and the famous “Peelers” a few years later.

[2] In many places the tie-in between elected officials and working police officers still remains, and it is difficult to see how the police can be fair and impartial when their futures are decided by bosses whose primary focus is political motivation and re-election as they move up their own career paths.

Bob S. November 10, 2014 10:45 AM

Sounds like a fair minded battle plan to me.

And what you (we) are dealing with is an all-out, worldwide cyber war by the governments, corporations and crooks of the world against the people, for domination and control of internet data.

Some want power, some want money, some want it all. At this point it appears world governments and corporations are set against the people, so laws supporting basic internet rights will not be granted, and those supposed rights we have, such as the First, Fourth and Fifth Amendments, will continue to be degraded.

That leaves technical experts as cyber warriors for the people. Many of them have gone to the other side already unfortunately.

It’s a giant Whack-a-Mole game. I’m not willing to bet on a winner right now. On the whole I would say we the people are losing.

Best wishes, Bruce. Fight the good fight.

vas pup November 10, 2014 2:41 PM

@Bruce:”All decisions need to be defensible after the fact and documented. Both the regulatory and litigation environments have gotten very complex, and decisions need to be made with defensibility in mind.”
Unfortunately, yes. Do we have a right to self-defense (rooted in our right to l i f e, l i b e r t y and the pursuit of happiness)? Do we have a right to balanced and appropriate a c t i v e self-defense (stand-your-ground laws)? Could ubiquitous (in time and space) protection be provided by institutionalized forces (LEAs of all flavors)? Should we bear (civil) responsibility for harm to a perpetrator within the scope of reasonable active self-defense? How does the preemptive-strike paradigm that countries use to justify action against imminent aggression map (if at all) onto the citizen/business environment?
Bruce,
When you write a new book related to IR, the answers to those questions need to be clarified as well, with a comparison of physical and IT security at both the personal and institutional levels.

Thomas November 10, 2014 6:17 PM

Is the organization […] planning layoffs? […] a new strategic partner?

So the IR team, which may be outsourced, needs access to confidential strategic information.

Wonderful example of security implications of a security system.

Bruce Schneier November 10, 2014 7:04 PM

“So the IR team, which may be outsourced, needs access to confidential strategic information. Wonderful example of security implications of a security system.”

Yes, but this is not uncommon in expert-level outsourcing. Think of outsourced legal services, outsourced HR, outsourced tax preparation.

SoWhatDidYouExpect November 10, 2014 7:42 PM

@Bruce:

Your last comment reminded me of a time in my former day job, when we were early in the internet game, with a collection of various applications for customers & dealers. Much of the work was outsourced to several local suppliers.

One of the key people on one supplier team was leaving the area, but it turned out that his personal userid/password was part of a major system. The business owner wanted to leave it in place, even though we would have no control over the individual who was leaving. (By leaving the area, I think that meant going back to India.)

There was a big fight over this, and sad to say, much of our outsourced government work is in the same boat.

Thomas November 10, 2014 8:03 PM

Yes, but this is not uncommon in expert-level outsourcing.
Think of outsourced legal services, outsourced HR, outsourced tax preparation.

What you tell your lawyer is protected by (from?) the law, and if the lawyer talks (whistle-blows?) he faces disbarment.

Until IR teams have similar protection and consequences I can see how people might be less forthcoming.

Andrew_K November 11, 2014 7:40 AM

Bruce,

To me, the “observe” point seems notoriously at odds with privacy interests. From a privacy point of view, there is data I do not want collected at all, nor do I want it available as live data.
Take just the physical security point. I assume physical security in this case means something like presence-detection sensors, light-switch status, door sensors, or whatever else might be available in an environment.
Collecting this kind of information poses a problem, since it can be used to track the people working in the building.

My point: The struggle of usability vs. security is obvious.
But what about privacy?

Bruce Schneier November 11, 2014 7:49 AM

“…to me, the “observe” point seems notoriously conflicting with privacy interests.”

Agreed. And, yes, this is a conflict. It’s easier when it’s a corporate network: the network is owned by the corporation for the benefit of the corporation, so they have the overriding interest. It’s harder when it’s a more general network, like a network that a hotel runs for its guests.

Justin November 11, 2014 12:01 PM

Incident response is a hard problem. In a sense it’s closing the barn door after the cows got out, but it’s necessary work.

To my thought, security (particularly incident response) isn’t something that can just be outsourced and then not given another thought, but I suppose a lot of businesses just don’t pay much attention to security until an incident happens. Or they just aren’t very knowledgeable about computer security.

I think a large part of a legitimate “incident response” service is not just to clean up after the incident, but to take the opportunity to evaluate the business’ security policies and practices, make appropriate recommendations to improve these, and educate relevant personnel at the business.

The business itself (management) needs to evaluate what was lost or damaged in the incident and the extent of the harm suffered as a result, and do a cost-benefit analysis of following recommendations for improved security.

John November 11, 2014 9:24 PM

I found this article informative. My main thought while reading it was how an outside party can possibly be as familiar with and informed about a corporation’s networks as the everyday IT staffers who administer the network on a daily basis.

It sounds like the IR team works together with the corporation’s IT team. Of course the IT team doesn’t call up the IR team until they’ve detected an incident. That’s if the IT team even notices the incident.

I believe the only way to take back control of our computing environment is to start developing more secure programming languages that don’t require 30 years of experience to program a secure piece of software.

Clive Robinson November 12, 2014 7:35 AM

@ John,

An external entity cannot be as familiar with an organisation’s network as those who administer it. However, that is not necessarily a disadvantage; in fact, it can be a benefit.

Those familiar with the network can and often do suffer from bias caused by their familiarity, and thus jump to conclusions based on either too little information or assumptions that are actually not valid.

Further, as administrators they are not likely to be as well practiced at dealing with certain types of exception behaviour as those whose job it is to deal with exceptions on a very regular basis, as an independent IR team would be.

Further, if the response team is not independent, management will not know whether the administrators are complicit in the exception, either by design or through some other factor such as insufficient knowledge or a lack of preventative measures due to internal organisational issues.

The other important issue is the scope, effect and legal requirements of the response. Are the administrators going to be sufficiently up to date on the law, especially recent case law, so that they do not make legal mistakes in gathering or searching out evidence? Sometimes it’s difficult to know where the boundaries are, and thus if a response is carried out by an administrator it could leave the organisation legally exposed, potentially with no defence to fall back on. An independent response team, however, will in effect remove this liability from the organisation.

However, as I noted in my comment above, there is also the question of the scope of the response. Sensible attackers hide their activities behind unsuspecting others, and in fact any organisation could be just another link in the chain of cut-outs attackers might use. Thus the ‘investigation’ issue arises. Currently, an organisation is unlikely to get help from the “cyber-war” hawk organisations run by the likes of the intelligence or military departments of government unless it has some kind of “clout”. Further, would any non-MIC organisation want the NSA et al. running loose inside its network, leaving god alone knows what backdoors etc. behind? Outside of the government’s directly affiliated IC/Mil agencies, the police are becoming less and less independent as the IC/Mil agencies use them as “fronts”, two examples being the US FBI and the UK National Crime Agency (SOCA as was). Thus the remaining regional policing organisations that were largely independent of direct government control are having ICT-related investigations taken away from them, and so are unable to assist organisations that either don’t have the “clout” or are not of interest to the government-affiliated IC/Mil fronts such as the FBI, NCA, et al.

The result will almost certainly be some form of “vigilantism”, as some organisations see other corporate entities such as Microsoft and the various IP rights-holders’ associations effectively act as vigilantes. This, as most will realise after some sober thought, cannot be a good idea, as it will end in “IT shoot-outs” where organisations that are on a particular “cut-out chain” start attacking each other out of what they think is “proactive self defence”. As the odds of this happening are high due to human nature, of which history has abundant examples, one has to ask if in fact this potential state of affairs is being quite deliberately engineered to happen, so that the IC/Mil agencies and other vested interests can use it to empire-build from the public purse.

Mike Amling November 12, 2014 6:06 PM

“Protection systems are almost technology…”

Protection systems are almost all technology?
Protection systems are mostly technology?

Mikael Witt November 18, 2014 2:57 PM

Good and valid points.

I would say that this points heavily in the direction of Managed Security Services, as many of the needed functions are very expensive to purchase/develop/maintain for an individual company.

I think that detection capability is one of the main problem areas that has to be a priority for effective incident management; security incident response relies heavily on fast and accurate detection, automated enrichment, and qualification of the results before any human touch. Products do not deliver the required quality, no matter what they promise in the prospectus. Without this, incident response will be bogged down in alerts, manual searches for additional data, and guesswork.

I have a few thoughts on the subject on http://blog.qvasir.com

Really like the OODA definition and wish that I had known about it earlier (before my blog post).

Best regards
/Micke

Ceri Charlton January 5, 2015 7:32 AM

I can see two dominant reasons why Incident Response is typically handled poorly.

1) Improving your Incident Response capability is one of those things which is very easy to put off on the grounds that, “We might never need it and X is more important right now…”

2) The overwhelming majority of people (even otherwise highly experienced Infosec professionals) are relatively inexperienced when it comes to dealing with real, potentially company-ending breaches. As I used to quip in my role investigating breaches as a regulator, “Most Infosec people only get to see one career-ending security breach first hand; I see them almost weekly.”

No one would claim to be a “Firewall expert” having only looked at one three times, for eight hours in total, with those three occasions spread over a decade and involving three different models, unless they knew they were being dishonest. A lot of people will, through no fault of their own, never see the inner workings of a big breach that makes the press, never mind the truly grand-scale Target, Sony, etc. style ones. Most likely because this experience is rare, the bar is considerably lower in terms of what is considered acceptable when claiming to have ‘extensive experience’ of handling information security incidents.
