Entries Tagged "incident response"


How Attorneys Are Harming Cybersecurity Incident Response

New paper: “Lessons Lost: Incident Response in the Age of Cyber Insurance and Breach Attorneys”:

Abstract: Incident Response (IR) allows victim firms to detect, contain, and recover from security incidents. It should also help the wider community avoid similar attacks in the future. In pursuit of these goals, technical practitioners are increasingly influenced by stakeholders like cyber insurers and lawyers. This paper explores these impacts via a multi-stage, mixed methods research design that involved 69 expert interviews, data on commercial relationships, and an online validation workshop. The first stage of our study established 11 stylized facts that describe how cyber insurance sends work to a small number of IR firms, drives down the fee paid, and appoints lawyers to direct technical investigators. The second stage showed that lawyers, when directing incident response, often: introduce legalistic contractual and communication steps that slow down incident response; advise IR practitioners not to write down remediation steps or to produce formal reports; and restrict access to any documents produced.

So, we’re not able to learn from these breaches because the attorneys are limiting what information becomes public. This is where we think about shielding companies from liability in exchange for making breach data public. It’s the sort of thing we do for airplane disasters.

EDITED TO ADD (6/13): A podcast interview with two of the authors.

Posted on June 7, 2023 at 7:06 AM

Security Orchestration and Incident Response

Last month at the RSA Conference, I saw a lot of companies selling security incident response automation. Their promise was to replace people with computers—sometimes with the addition of machine learning or other artificial intelligence techniques—and to respond to attacks at computer speeds.

While this is a laudable goal, there’s a fundamental problem with doing this in the short term. You can only automate what you’re certain about, and there is still an enormous amount of uncertainty in cybersecurity. Automation has its place in incident response, but the focus needs to be on making the people effective, not on replacing them—security orchestration, not automation.

This isn’t just a choice of words—it’s a difference in philosophy. The US military went through this in the 1990s. What was called the Revolution in Military Affairs (RMA) was supposed to change how warfare was fought. Satellites, drones and battlefield sensors were supposed to give commanders unprecedented information about what was going on, while networked soldiers and weaponry would enable troops to coordinate to a degree never before possible. In short, the traditional fog of war would be replaced by perfect information, providing certainty instead of uncertainty. They, too, believed certainty would fuel automation and, in many circumstances, allow technology to replace people.

Of course, it didn’t work out that way. The US learned in Afghanistan and Iraq that there are a lot of holes in both its collection and coordination systems. Drones have their place, but they can’t replace ground troops. The advances from the RMA brought with them some enormous advantages, especially against militaries that didn’t have access to the same technologies, but never resulted in certainty. Uncertainty still rules the battlefield, and soldiers on the ground are still the only effective way to control a region of territory.

But along the way, we learned a lot about how the feeling of certainty affects military thinking. Last month, I attended a lecture on the topic by H.R. McMaster. This was before he became President Trump’s national security advisor-designate. Then, he was the director of the Army Capabilities Integration Center. His lecture touched on many topics, but at one point he talked about the failure of the RMA. He confirmed that military strategists mistakenly believed that data would give them certainty. But he took this change in thinking further, outlining the ways this belief in certainty had repercussions in how military strategists thought about modern conflict.

McMaster’s observations are directly relevant to Internet security incident response. We too have been led to believe that data will give us certainty, and we are making the same mistakes that the military did in the 1990s. In a world of uncertainty, there’s a premium on understanding, because commanders need to figure out what’s going on. In a world of certainty, knowing what’s going on becomes a simple matter of data collection.

I see this same fallacy in Internet security. Many companies exhibiting at the RSA Conference promised to collect and display more data, claiming that the data would reveal everything. This simply isn’t true. Data does not equal information, and information does not equal understanding. We need data, but we also must prioritize understanding the data we have over collecting ever more data. Much like the problems with bulk surveillance, the “collect it all” approach provides minimal value over collecting the specific data that’s useful.

In a world of uncertainty, the focus is on execution. In a world of certainty, the focus is on planning. I see this manifesting in Internet security as well. My own Resilient Systems—now part of IBM Security—allows incident response teams to manage security incidents and intrusions. While the tool is useful for planning and testing, its real focus is always on execution.

Uncertainty demands initiative, while certainty demands synchronization. Here, again, we are heading too far down the wrong path. The purpose of all incident response tools should be to make the human responders more effective. They need both the ability to exercise initiative and the capability to do so effectively.

When things are uncertain, you want your systems to be decentralized. When things are certain, centralization is more important. Good incident response teams know that decentralization goes hand in hand with initiative. And finally, a world of uncertainty prioritizes command, while a world of certainty prioritizes control. Again, effective incident response teams know this, and effective managers aren’t scared to release and delegate control.

Like the US military, we in the incident response field have shifted too much into the world of certainty. We have prioritized data collection, preplanning, synchronization, centralization and control. You can see it in the way people talk about the future of Internet security, and you can see it in the products and services offered on the show floor of the RSA Conference.

Automation, too, is fixed. Incident response needs to be dynamic and agile, because you are never certain and there is an adaptive, malicious adversary on the other end. You need a response system that has human controls and can modify itself on the fly. Automation just doesn’t allow a system to do that to the extent that’s needed in today’s environment. Just as the military shifted from trying to replace the soldier to making the best soldier possible, we need to do the same.

For some time, I have been talking about incident response in terms of OODA loops. This is a way of thinking about real-time adversarial relationships, originally developed for airplane dogfights, but much more broadly applicable. OODA stands for observe-orient-decide-act, and it’s what people responding to a cybersecurity incident do constantly, over and over again. We need tools that augment each of those four steps. These tools need to operate in a world of uncertainty, where there is never enough data to know everything that is going on. We need to prioritize understanding, execution, initiative, decentralization and command.
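
As a rough, hypothetical illustration of that idea (not any particular product’s design), here is a minimal sketch of what a human-in-the-loop OODA cycle could look like in tooling terms; the data shapes and callable names are invented for the example.

```python
# A minimal, hypothetical sketch of a human-in-the-loop OODA cycle for
# incident response tooling. Nothing here models a real product; the
# data shapes and function names are illustrative only.
from dataclasses import dataclass, field
from typing import Callable


@dataclass
class Incident:
    observations: list = field(default_factory=list)   # raw events, alerts, logs
    context: dict = field(default_factory=dict)        # what the data means here
    actions_taken: list = field(default_factory=list)  # trail of responses so far
    contained: bool = False


def ooda_cycle(incident: Incident,
               observe: Callable[[], list],
               orient: Callable[[list], dict],
               decide: Callable[[Incident], str],
               act: Callable[[str], None],
               max_iterations: int = 100) -> Incident:
    """Run the loop until the human responders declare containment.

    The tooling augments each step; `decide` is where a person stays in
    the loop rather than being automated away.
    """
    for _ in range(max_iterations):
        incident.observations.extend(observe())            # Observe: gather fresh data
        incident.context = orient(incident.observations)   # Orient: put it in context
        decision = decide(incident)                         # Decide: a human call
        if decision == "close":
            incident.contained = True
            return incident
        act(decision)                                        # Act: execute, then loop again
        incident.actions_taken.append(decision)
    return incident
```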

At the same time, we’re going to have to make all of this scale. If anything, the most seductive promise of a world of certainty and automation is that it allows defense to scale. The problem is that we’re not there yet. We can automate and scale parts of IT security, such as antivirus, automatic patching and firewall management, but we can’t yet scale incident response. We still need people. And we need to understand what can be automated and what can’t be.

The word I prefer is orchestration. Security orchestration represents the union of people, process and technology. It’s computer automation where it works, and human coordination where that’s necessary. It’s networked systems giving people understanding and capabilities for execution. It’s making those on the front lines of incident response the most effective they can be, instead of trying to replace them. It’s the best approach we have for cyberdefense.

Automation has its place. If you think about the product categories where it has worked, they’re all areas where we have pretty strong certainty. Automation works in antivirus, firewalls, patch management and authentication systems. None of them is perfect, but all those systems are right almost all the time, and we’ve developed ancillary systems to deal with it when they’re wrong.

Automation fails in incident response because there’s too much uncertainty. Actions can be automated once the people understand what’s going on, but people are still required. For example, IBM’s Watson for Cyber Security provides insights for incident response teams based on its ability to ingest and find patterns in an enormous amount of freeform data. It does not attempt the level of understanding necessary to take people out of the equation.

From within an orchestration model, automation can be incredibly powerful. But it’s the human-centric orchestration model—the dashboards, the reports, the collaboration—that makes automation work. Otherwise, you’re blindly trusting the machine. And when an uncertain process is automated, the results can be dangerous.
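
One way to picture human-centric orchestration is an approval gate: automation acts only where confidence is high, and everything uncertain goes to a person along with the machine’s reasoning. The sketch below is illustrative only; the threshold, verdict fields, and queue are invented.

```python
# Illustrative only: an approval gate that lets automation act on
# high-confidence verdicts and routes everything uncertain to a human.
# Threshold, verdict fields, and queue structure are hypothetical.
HIGH_CONFIDENCE = 0.95


def handle_alert(alert: dict, human_queue: list) -> str:
    """Return what happened to the alert: 'auto-contained' or 'escalated'."""
    verdict = alert.get("verdict", {})        # e.g. {"action": "quarantine", "confidence": 0.99}
    confidence = verdict.get("confidence", 0.0)

    if verdict.get("action") == "quarantine" and confidence >= HIGH_CONFIDENCE:
        # The certain case: analogous to antivirus or patch management,
        # where an occasional wrong call is recoverable afterward.
        quarantine_host(alert["host"])
        return "auto-contained"

    # The uncertain case: show the person the evidence, don't act blindly.
    human_queue.append({
        "alert": alert,
        "machine_reasoning": verdict,
        "needs": "analyst decision",
    })
    return "escalated"


def quarantine_host(host: str) -> None:
    # Placeholder for whatever containment mechanism an organization uses.
    print(f"isolating {host} from the network")
```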

Technology continues to advance, and this is all a changing target. Eventually, computers will become intelligent enough to replace people at real-time incident response. My guess, though, is that computers are not going to get there by collecting enough data to be certain. More likely, they’ll develop the ability to exhibit understanding and operate in a world of uncertainty. That’s a much harder goal.

Yes, today, this is all science fiction. But it’s not stupid science fiction, and it might become reality during the lifetimes of our children. Until then, we need people in the loop. Orchestration is a way to achieve that.

This essay previously appeared on the Security Intelligence blog.

Posted on March 29, 2017 at 6:16 AM

Credential Stealing as an Attack Vector

Traditional computer security concerns itself with vulnerabilities. We employ antivirus software to detect malware that exploits vulnerabilities. We have automatic patching systems to fix vulnerabilities. We debate whether the FBI should be permitted to introduce vulnerabilities in our software so it can get access to systems with a warrant. This is all important, but what’s missing is a recognition that software vulnerabilities aren’t the most common attack vector: credential stealing is.

The most common way hackers of all stripes, from criminals to hacktivists to foreign governments, break into networks is by stealing and using a valid credential. Basically, they steal passwords, set up man-in-the-middle attacks to piggy-back on legitimate logins, or engage in cleverer attacks to masquerade as authorized users. It’s a more effective avenue of attack in many ways: it doesn’t involve finding a zero-day or unpatched vulnerability, there’s less chance of discovery, and it gives the attacker more flexibility in technique.

Rob Joyce, the head of the NSA’s Tailored Access Operations (TAO) group—basically the country’s chief hacker—gave a rare public talk at a conference in January. In essence, he said that zero-day vulnerabilities are overrated, and credential stealing is how he gets into networks: “A lot of people think that nation states are running their operations on zero days, but it’s not that common. For big corporate networks, persistence and focus will get you in without a zero day; there are so many more vectors that are easier, less risky, and more productive.”

This is true for us, and it’s also true for those attacking us. It’s how the Chinese hackers breached the Office of Personnel Management in 2015. The 2013 criminal attack against Target Corporation started when hackers stole the login credentials of the company’s HVAC vendor. Iranian hackers stole US login credentials. And the hacktivist who broke into the cyber-arms manufacturer Hacking Team and published pretty much every proprietary document from that company used stolen credentials.

As Joyce said, stealing a valid credential and using it to access a network is easier, less risky, and ultimately more productive than using an existing vulnerability, even a zero-day.

Our notions of defense need to adapt to this change. First, organizations need to beef up their authentication systems. There are lots of tricks that help here: two-factor authentication, one-time passwords, physical tokens, smartphone-based authentication, and so on. None of these is foolproof, but they all make credential stealing harder.
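
As one concrete example of the tricks listed above, here is a minimal sketch of RFC 6238-style time-based one-time passwords (TOTP) using only the Python standard library. It shows the idea, not a deployable authentication system; real deployments also need secret provisioning, rate limiting, and replay protection.

```python
# Minimal sketch of time-based one-time passwords (RFC 6238) using only
# the standard library. Illustrative: a real deployment also needs secret
# provisioning, rate limiting, clock-drift handling, and replay protection.
import base64
import hashlib
import hmac
import struct
import time


def totp(secret_b32: str, timestamp: float | None = None,
         step: int = 30, digits: int = 6) -> str:
    """Compute the current 6-digit code for a base32-encoded shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((timestamp or time.time()) // step)
    msg = struct.pack(">Q", counter)                    # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                          # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)


def verify(secret_b32: str, submitted: str, drift_steps: int = 1) -> bool:
    """Accept codes from adjacent time steps to tolerate small clock drift."""
    now = time.time()
    return any(
        hmac.compare_digest(totp(secret_b32, now + i * 30), submitted)
        for i in range(-drift_steps, drift_steps + 1)
    )
```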

Second, organizations need to invest in breach detection and—most importantly—incident response. Credential-stealing attacks tend to bypass traditional IT security software. But attacks are complex and multi-step. Being able to detect them in progress, and to respond quickly and effectively enough to kick attackers out and restore security, is essential to resilient network security today.
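
Because credential-based intrusions look like legitimate logins, detecting them usually means correlating several weak signals over time rather than matching a single signature. The sketch below is purely illustrative (the event fields and thresholds are invented), but it shows the shape of the idea: flag an account when a login from a never-before-seen network is followed by an unusually large data pull.

```python
# Illustrative sketch of correlating weak signals to spot a possible
# credential-theft intrusion. Event fields and thresholds are invented;
# real detection pipelines are far richer (and noisier).
from collections import defaultdict

KNOWN_LOCATIONS = defaultdict(set)   # user -> source networks seen before
BYTES_THRESHOLD = 500_000_000        # "unusual" data volume, picked arbitrarily


def score_events(events: list[dict]) -> list[str]:
    """Return accounts worth a human analyst's attention.

    events: dicts like {"user": "...", "type": "login" | "data_access",
                        "src_net": "...", "bytes": int}
    """
    suspicious = []
    new_location_logins = set()

    for ev in events:
        user = ev["user"]
        if ev["type"] == "login":
            if ev["src_net"] not in KNOWN_LOCATIONS[user]:
                new_location_logins.add(user)            # weak signal #1
                KNOWN_LOCATIONS[user].add(ev["src_net"])
        elif ev["type"] == "data_access":
            # Weak signal #2 only matters in combination with signal #1.
            if user in new_location_logins and ev["bytes"] > BYTES_THRESHOLD:
                suspicious.append(user)

    return suspicious
```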

Vulnerabilities are still critical. Fixing vulnerabilities is still vital for security, and introducing new vulnerabilities into existing systems is still a disaster. But strong authentication and robust incident response are also critical. And an organization that skimps on these will find itself unable to keep its networks secure.

This essay originally appeared on Xconomy.

EDITED TO ADD (5/23): Portuguese translation.

Posted on May 4, 2016 at 6:51 AM

Resilient Systems News: IBM to Buy Resilient Systems

Today, IBM announced its intention to purchase my company, Resilient Systems. (Yes, the rumors were basically true.)

I think this is a great development for Resilient Systems and its incident-response platform. (I know, but that’s what analysts are calling it.) IBM is an ideal partner for Resilient, and one that I have been quietly hoping would acquire it for over a year now. IBM has a unique combination of security products and services, and an existing organization that will help Resilient immeasurably. It’s a good match.

Last year, Resilient integrated with IBM’s SIEM—that’s Security Information and Event Management—system, QRadar. My guess is that’s what attracted IBM to us in the first place. Resilient has the platform that makes QRadar actionable. Conversely, QRadar makes Resilient’s platform more powerful. The products are each good separately, but really good together.

And to IBM’s credit, it understood that its customers have all sorts of protection and detection security products—both IBM’s and others—and no single response hub to make sense of it all. This is what Resilient does extremely well, and can now do for IBM’s customers globally.

IBM is one of the largest enterprise security companies in the world. That’s not obvious; the 6,500-person IBM Security organization gets lost in the 390,000-person company. It has $2 billion in annual sales. It has a great reputation with both customers and analysts. And while Resilient is the industry leader in its field and has a great reputation, large companies like to buy from other large companies. Resilient has repeatedly sold to large enterprise customers, but it always takes some convincing. Being part of IBM makes it a safe choice. IBM also has a sales and service force that will allow Resilient to scale quickly. The company could have done it on its own eventually, but it would have taken many years.

It’s a sad reality in tech that too often—once, unfortunately, in my personal experience—acquisitions don’t work out for either the acquirer or the acquiree. Deals are made in optimism, but the reality is much less rosy.

I don’t think that will happen here. As an acquirer, IBM has a history of effectively integrating the teams and the technologies it acquires. It has bought something like 15 security companies in the past decade—five in the past two years alone—and has (more or less) successfully integrated all of them. It carefully selects the companies it buys, spending a lot of time making sure the integration is successful. I was stunned by the amount of work the people from IBM did over the past two months, analyzing every nook and cranny of Resilient in detail: both to verify what they were buying and to figure out how to successfully integrate it.

IBM is going through a lot of reorganizing right now, but security is one of its big bets. It’s the fastest-growing vendor in the industry. It hired 1,000 security people in 2015. It needs to continue to grow, and Resilient is now a part of that growth.

Finally, IBM is an East Coast company. This may seem like a trivial point, but Resilient Systems is very much a product of the Boston area. I didn’t want Resilient to be a far-flung satellite of a Silicon Valley company. IBM Security is also headquartered in Cambridge, just five T stops away. That’s way better than a seven-hour no-legroom bad-food transcontinental flight away.

Random aside: this will be the third company I will have worked for whose name is no longer an acronym for its longer, original, name.

When I joined Resilient Systems just over two years ago, I assumed that it would eventually be purchased by a large and diversified company. Acquisitions in the security space are hot right now, and I have long believed that security will be subsumed by more general IT services. Surveying the field, IBM was always at the top of my list. Resilient had several suitors who expressed interest in purchasing it, as well as many investors who wanted to put money into the company. This was our best option.

We’re still working out what I’ll be doing at IBM; these past months have focused more on the company than on me personally. I know they want me to be involved in all of IBM Security. The people I’ll be working with know I’ll continue to blog and write books. (They also know that my website is way more popular than theirs.) They know I’ll continue to talk about politically sensitive topics. They know they won’t be able to edit or constrain my writings and speaking. At least, they say they know it; we’ll see what actually happens. But I’m optimistic. There are other IBM people whose public writings do not represent the views of IBM—so there’s precedent.

All in all, this is great news for Resilient Systems and—I hope—great news for IBM. We’re still exhibiting at the RSA Conference. I’m still serving a curated cocktail at the booth (#1727, South Hall) on Tuesday from 4:00-6:00. We’re still giving away signed copies of Data and Goliath. I’m not sure what sort of new signage we’ll have. No one liked my idea of a large spray-painted “Under New Management” sign nailed to the side of the booth, but I’m still lobbying for that.

EDITED TO ADD (3/17): This is how IBM is positioning us, at least initially.

Posted on February 29, 2016 at 11:08 AM

Resilient Systems News

Former Raytheon CEO Bill Swanson has joined our board of directors.

For those who don’t know, Resilient Systems is my company. I’m the CTO, and we sell an incident-response management platform that…well…helps IR teams to manage incidents. It’s a single hub that allows a team to collect data about an incident, assign and manage tasks, automate actions, integrate intelligence information, and so on. It’s designed to be powerful, flexible, and intuitive—if your HR or legal person needs to get involved, she has to be able to use it without any training. I’m really impressed with how well it works. Incident response is all about people, and the platform makes teams more effective. This is probably the best description of what we do.
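
For flavor only, here is a toy sketch of the kind of objects such a hub revolves around. This is not Resilient’s actual data model; the names and fields are invented to show incidents decomposed into tasks owned by different roles, technical and otherwise.

```python
# A toy sketch of an incident-response hub's core objects. This is not
# Resilient's actual data model; names and fields are invented to show
# the idea of incidents broken into tasks owned by different roles.
from dataclasses import dataclass, field
from datetime import datetime


@dataclass
class Task:
    title: str
    owner_role: str          # e.g. "IT", "Legal", "HR", "Communications"
    done: bool = False
    notes: list = field(default_factory=list)


@dataclass
class Incident:
    name: str
    opened: datetime = field(default_factory=datetime.utcnow)
    artifacts: list = field(default_factory=list)    # IPs, hashes, affected hosts
    tasks: list = field(default_factory=list)

    def assign(self, title: str, owner_role: str) -> Task:
        task = Task(title, owner_role)
        self.tasks.append(task)
        return task


# Example: a phishing incident that needs both technical and legal work.
incident = Incident("Phishing campaign against finance team")
incident.assign("Reset affected credentials", "IT")
incident.assign("Assess notification obligations", "Legal")
```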

We have lots of large- and medium-sized companies as customers. They’re all happy, and we continue to sell this thing at an impressive rate. Our Q3 numbers were fantastic. It’s kind of scary, really.

Posted on October 2, 2015 at 2:06 PM

The Future of Incident Response

Security is a combination of protection, detection, and response. It’s taken the industry a long time to get to this point, though. The 1990s was the era of protection. Our industry was full of products that would protect your computers and network. By 2000, we realized that detection needed to be formalized as well, and the industry was full of detection products and services.

This decade is one of response. Over the past few years, we’ve started seeing incident response (IR) products and services. Security teams are incorporating them into their arsenal because of three trends in computing. One, we’ve lost control of our computing environment. More of our data is held in the cloud by other companies, and more of our actual networks are outsourced. This makes response more complicated, because we might not have visibility into parts of our critical network infrastructures.

Two, attacks are getting more sophisticated. The rise of the APT (advanced persistent threat)—attacks that specifically target their victims for reasons other than simple financial theft—brings with it a new sort of attacker, which requires a new threat model. Also, as hacking becomes a more integral part of geopolitics, unrelated networks are increasingly collateral damage in nation-state fights.

And three, companies continue to under-invest in protection and detection, both of which are imperfect even under the best of circumstances, obliging response to pick up the slack.

Way back in the 1990s, I used to say that “security is a process, not a product.” That was a strategic statement about the fallacy of thinking you could ever be done with security; you need to continually reassess your security posture in the face of an ever-changing threat landscape.

At a tactical level, security is both a product and a process. Really, it’s a combination of people, process, and technology. What changes are the ratios. Protection systems are almost entirely technology, with some assistance from people and process. Detection requires more-or-less equal proportions of people, process, and technology. Response is mostly done by people, with critical assistance from process and technology.

Usability guru Lorrie Faith Cranor once wrote, “Whenever possible, secure system designers should find ways of keeping humans out of the loop.” That’s sage advice, but you can’t automate IR. Everyone’s network is different. All attacks are different. Everyone’s security environments are different. The regulatory environments are different. All organizations are different, and political and economic considerations are often more important than technical considerations. IR needs people, because successful IR requires thinking.

This is new for the security industry, and it means that response products and services will look different. For most of its life, the security industry has been plagued with the problems of a lemons market. That’s a term from economics that refers to a market where buyers can’t tell the difference between good products and bad. In these markets, mediocre products drive good ones out of the market; price is the driver, because there’s no good way to test for quality. It’s been true in anti-virus, it’s been true in firewalls, it’s been true in IDSs, and it’s been true elsewhere. But because IR is people-focused in ways protection and detection are not, it won’t be true here. Better products will do better because buyers will quickly be able to determine that they’re better.

The key to successful IR is found in Cranor’s next sentence: “However, there are some tasks for which feasible, or cost effective, alternatives to humans are not available. In these cases, system designers should engineer their systems to support the humans in the loop, and maximize their chances of performing their security-critical functions successfully.” What we need is technology that aids people, not technology that supplants them.

The best way I’ve found to think about this is OODA loops. OODA stands for “observe, orient, decide, act,” and it’s a way of thinking about real-time adversarial situations developed by US Air Force military strategist John Boyd. He was thinking about fighter jets, but the general idea has been applied to everything from contract negotiations to boxing—and computer and network IR.

Speed is essential. People in these situations are constantly going through OODA loops in their head. And if you can do yours faster than the other guy—if you can “get inside his OODA loop”—then you have an enormous advantage.

We need tools to facilitate all of these steps:

  • Observe, which means knowing what’s happening on our networks in real time. This includes real-time threat detection information from IDSs, log monitoring and analysis data, network and system performance data, standard network management data, and even physical security information—and then tools that know how to synthesize and present it all in useful formats. Incidents aren’t standardized; they’re all different. The more an IR team can observe what’s happening on the network, the more they can understand the attack. This means that an IR team needs to be able to operate across the entire organization.
  • Orient, which means understanding what it means in context, both in the context of the organization and the context of the greater Internet community. It’s not enough to know about the attack; IR teams need to know what it means. Is there new malware being used by cybercriminals? Is the organization rolling out a new software package or planning layoffs? Has the organization seen attacks from this particular IP address before? Has the network been opened to a new strategic partner? Answering these questions means tying data from the network to information from the news, network intelligence feeds, and other information from the organization. What’s going on in an organization often matters more in IR than the attack’s technical details.
  • Decide, which means figuring out what to do at that moment. This is actually difficult because it involves knowing who has the authority to decide and giving them the information to decide quickly. IR decisions often involve executive input, so it’s important to be able to get those people the information they need quickly and efficiently. All decisions need to be defensible after the fact and documented. Both the regulatory and litigation environments have gotten very complex, and decisions need to be made with defensibility in mind (a brief sketch of documenting such decisions follows this list).
  • Act, which means being able to make changes quickly and effectively on our networks. IR teams need access to the organization’s network—all of the organization’s network. Again, incidents differ, and it’s impossible to know in advance what sort of access an IR team will need. But ultimately, they need broad access; security will come from audit rather than access control. And they need to train repeatedly, because nothing improves someone’s ability to act more than practice.
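
Here is the brief sketch mentioned above: a toy decision record that captures who decided what, on what evidence, and why, so the choice is documented and defensible after the fact. The structure is invented for illustration, not taken from any real product.

```python
# Toy sketch of documenting incident-response decisions so they are
# defensible after the fact. The fields are invented for illustration;
# a real tool would tie these records to the incident timeline and to
# the access audit mentioned above.
import json
import time
from dataclasses import asdict, dataclass, field


@dataclass
class DecisionRecord:
    incident_id: str
    decided_by: str                                 # who had the authority to decide
    decision: str                                   # e.g. "isolate host db-07, notify counsel"
    evidence: list = field(default_factory=list)    # alert IDs, log excerpts
    rationale: str = ""                             # why, in the decider's own words
    timestamp: float = field(default_factory=time.time)


def record_decision(record: DecisionRecord, path: str = "decisions.jsonl") -> None:
    """Append-only log: decisions are added, never edited or removed."""
    with open(path, "a") as log:
        log.write(json.dumps(asdict(record)) + "\n")


# Example usage during an incident:
record_decision(DecisionRecord(
    incident_id="IR-2014-042",
    decided_by="CISO on-call",
    decision="Take the payments API offline for 2 hours",
    evidence=["ids-alert-8841", "netflow-spike-eu-west"],
    rationale="Active exfiltration suspected; outage cost < breach cost",
))
```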

Pulling all of these tools together under a unified framework will make IR work. And making IR work is the ultimate key to making security work. The goal here is to bring people, process, and technology together in a way we haven’t seen before in network security. It’s something we need to do to continue to defend against the threats.

This essay originally appeared in IEEE Security & Privacy.

Posted on November 10, 2014 at 6:51 AM
