Entries Tagged "science fiction"


Security Orchestration and Incident Response

Last month at the RSA Conference, I saw a lot of companies selling security incident response automation. Their promise was to replace people with computers—sometimes with the addition of machine learning or other artificial intelligence techniques—and to respond to attacks at computer speeds.

While this is a laudable goal, there’s a fundamental problem with doing this in the short term. You can only automate what you’re certain about, and there is still an enormous amount of uncertainty in cybersecurity. Automation has its place in incident response, but the focus needs to be on making the people effective, not on replacing them—security orchestration, not automation.

This isn’t just a choice of words—it’s a difference in philosophy. The US military went through this in the 1990s. What was called the Revolution in Military Affairs (RMA) was supposed to change how warfare was fought. Satellites, drones and battlefield sensors were supposed to give commanders unprecedented information about what was going on, while networked soldiers and weaponry would enable troops to coordinate to a degree never before possible. In short, the traditional fog of war would be replaced by perfect information, providing certainty instead of uncertainty. They, too, believed certainty would fuel automation and, in many circumstances, allow technology to replace people.

Of course, it didn’t work out that way. The US learned in Afghanistan and Iraq that there are a lot of holes in both its collection and coordination systems. Drones have their place, but they can’t replace ground troops. The advances from the RMA brought with them some enormous advantages, especially against militaries that didn’t have access to the same technologies, but never resulted in certainty. Uncertainty still rules the battlefield, and soldiers on the ground are still the only effective way to control a region of territory.

But along the way, we learned a lot about how the feeling of certainty affects military thinking. Last month, I attended a lecture on the topic by H.R. McMaster. This was before he became President Trump’s national security advisor-designate; at the time, he was the director of the Army Capabilities Integration Center. His lecture touched on many topics, but at one point he talked about the failure of the RMA. He confirmed that military strategists mistakenly believed that data would give them certainty. But he took the argument further, outlining how this belief in certainty shaped the way military strategists thought about modern conflict.

McMaster’s observations are directly relevant to Internet security incident response. We too have been led to believe that data will give us certainty, and we are making the same mistakes that the military did in the 1990s. In a world of uncertainty, there’s a premium on understanding, because commanders need to figure out what’s going on. In a world of certainty, knowing what’s going on becomes a simple matter of data collection.

I see this same fallacy in Internet security. Many companies exhibiting at the RSA Conference promised to collect and display more data and that the data will reveal everything. This simply isn’t true. Data does not equal information, and information does not equal understanding. We need data, but we also must prioritize understanding the data we have over collecting ever more data. Much like the problems with bulk surveillance, the “collect it all” approach provides minimal value over collecting the specific data that’s useful.

In a world of uncertainty, the focus is on execution. In a world of certainty, the focus is on planning. I see this manifesting in Internet security as well. My own Resilient Systems—now part of IBM Security—allows incident response teams to manage security incidents and intrusions. While the tool is useful for planning and testing, its real focus is always on execution.

Uncertainty demands initiative, while certainty demands synchronization. Here, again, we are heading too far down the wrong path. The purpose of all incident response tools should be to make the human responders more effective. They need both the latitude to exercise initiative and the capability to exercise it effectively.

When things are uncertain, you want your systems to be decentralized. When things are certain, centralization is more important. Good incident response teams know that decentralization goes hand in hand with initiative. And finally, a world of uncertainty prioritizes command, while a world of certainty prioritizes control. Again, effective incident response teams know this, and effective managers aren’t scared to release and delegate control.

Like the US military, we in the incident response field have shifted too much into the world of certainty. We have prioritized data collection, preplanning, synchronization, centralization and control. You can see it in the way people talk about the future of Internet security, and you can see it in the products and services offered on the show floor of the RSA Conference.

Automation, too, is fixed. Incident response needs to be dynamic and agile, because you are never certain and there is an adaptive, malicious adversary on the other end. You need a response system that has human controls and can modify itself on the fly. Automation just doesn’t allow a system to do that to the extent that’s needed in today’s environment. Just as the military shifted from trying to replace the soldier to making the best soldier possible, we need to do the same.

For some time, I have been talking about incident response in terms of OODA loops. This is a way of thinking about real-time adversarial relationships, originally developed for airplane dogfights, but much more broadly applicable. OODA stands for observe-orient-decide-act, and it’s what people responding to a cybersecurity incident do constantly, over and over again. We need tools that augment each of those four steps. These tools need to operate in a world of uncertainty, where there is never enough data to know everything that is going on. We need to prioritize understanding, execution, initiative, decentralization and command.
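As a rough illustration of what augmenting each step might look like, here is a minimal, hypothetical sketch of a human-in-the-loop OODA cycle. The function names and data are invented for illustration and do not correspond to any real product or API.

```python
# Hypothetical sketch of a human-in-the-loop OODA cycle for incident response.
# All names and data here are invented for illustration; the point is that tools
# augment each step while a human responder stays in the loop.

from dataclasses import dataclass, field

@dataclass
class Incident:
    alerts: list = field(default_factory=list)       # raw observations
    context: dict = field(default_factory=dict)      # enriched, oriented view
    actions_taken: list = field(default_factory=list)

def observe(incident):
    """Observe: gather alerts and telemetry relevant to the incident."""
    incident.alerts.append({"source": "ids", "detail": "suspicious outbound traffic"})

def orient(incident):
    """Orient: enrich the raw data with context so a person can understand it."""
    incident.context["affected_hosts"] = ["db-01"]                 # assumed example data
    incident.context["hypothesis"] = "possible data exfiltration"  # assumed example data

def decide(incident):
    """Decide: a human responder chooses among options the tooling presents."""
    options = ["isolate host", "reset credentials", "keep monitoring"]
    print("Hypothesis:", incident.context.get("hypothesis"))
    return options[0]  # in practice this is a human judgment, not a hard-coded pick

def act(incident, action):
    """Act: carry out and record the chosen response."""
    incident.actions_taken.append(action)

incident = Incident()
for _ in range(2):  # responders cycle through OODA repeatedly, not just once
    observe(incident)
    orient(incident)
    act(incident, decide(incident))
print(incident.actions_taken)
```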

At the same time, we’re going to have to make all of this scale. If anything, the most seductive promise of a world of certainty and automation is that it allows defense to scale. The problem is that we’re not there yet. We can automate and scale parts of IT security, such as antivirus, automatic patching and firewall management, but we can’t yet scale incident response. We still need people. And we need to understand what can be automated and what can’t be.

The word I prefer is orchestration. Security orchestration represents the union of people, process and technology. It’s computer automation where it works, and human coordination where that’s necessary. It’s networked systems giving people understanding and capabilities for execution. It’s making those on the front lines of incident response the most effective they can be, instead of trying to replace them. It’s the best approach we have for cyberdefense.

Automation has its place. If you think about the product categories where it has worked, they’re all areas where we have pretty strong certainty. Automation works in antivirus, firewalls, patch management and authentication systems. None of them is perfect, but all those systems are right almost all the time, and we’ve developed ancillary systems to deal with it when they’re wrong.

Automation fails in incident response because there’s too much uncertainty. Actions can be automated once the people understand what’s going on, but people are still required. For example, IBM’s Watson for Cyber Security provides insights for incident response teams based on its ability to ingest and find patterns in an enormous amount of freeform data. It does not attempt the level of understanding necessary to take people out of the equation.

From within an orchestration model, automation can be incredibly powerful. But it’s the human-centric orchestration model—the dashboards, the reports, the collaboration—that makes automation work. Otherwise, you’re blindly trusting the machine. And when an uncertain process is automated, the results can be dangerous.
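One way to picture that relationship is a gate between the automation and the people: in the minimal, hypothetical sketch below, well-understood, high-confidence actions run automatically and everything else is routed to a human responder. The playbook names and threshold are assumptions for illustration, not any vendor’s implementation.

```python
# Hypothetical sketch: automation embedded inside a human-centric orchestration layer.
# Well-understood, high-confidence actions run automatically; anything uncertain is
# escalated to a human responder rather than blindly trusted to the machine.

AUTOMATABLE = {"quarantine_known_malware", "block_known_bad_ip"}  # assumed playbook names

def handle(step, confidence, threshold=0.9):
    """Automate a response step only when it is both well understood and
    high confidence; otherwise hand it to a person along with the context."""
    if step in AUTOMATABLE and confidence >= threshold:
        return f"automated: {step}"
    return f"escalated to human responder: {step} (confidence={confidence:.2f})"

print(handle("block_known_bad_ip", confidence=0.97))      # certain enough to automate
print(handle("isolate_production_db", confidence=0.55))   # uncertain: a person decides
```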

Technology continues to advance, and this is all a moving target. Eventually, computers will become intelligent enough to replace people at real-time incident response. My guess, though, is that computers are not going to get there by collecting enough data to be certain. More likely, they’ll develop the ability to exhibit understanding and operate in a world of uncertainty. That’s a much harder goal.

Yes, today, this is all science fiction. But it’s not stupid science fiction, and it might become reality during the lifetimes of our children. Until then, we need people in the loop. Orchestration is a way to achieve that.

This essay previously appeared on the Security Intelligence blog.

Posted on March 29, 2017 at 6:16 AM

Isaac Asimov on Security Theater

A great find:

In his 1956 short story, “Let’s Get Together,” Isaac Asimov describes security measures proposed to counter a terrorist threat:

“Consider further that this news will leak out as more and more people become involved in our countermeasures and more and more people begin to guess what we’re doing. Then what? The panic might do us more harm than any one TC bomb.”

The Presidential Assistant said irritably, “In Heaven’s name, man, what do you suggest we do, then?”

“Nothing,” said Lynn. “Call their bluff. Live as we have lived and gamble that They won’t dare break the stalemate for the sake of a one-bomb head start.”

“Impossible!” said Jeffreys. “Completely impossible. The welfare of all of Us is very largely in my hands, and doing nothing is the one thing I cannot do. I agree with you, perhaps, that X-ray machines at sports arenas are a kind of skin-deep measure that won’t be effective, but it has to be done so that people, in the aftermath, do not come to the bitter conclusion that we tossed our country away for the sake of a subtle line of reasoning that encouraged donothingism.”

This Jeffreys guy sounds as if he works for the TSA.

Posted on October 3, 2011 at 1:20 PM

Using Science Fiction to Teach Computer Security

Interesting paper: “Science Fiction Prototyping and Security Education: Cultivating Contextual and Societal Thinking in Computer Security Education and Beyond,” by Tadayoshi Kohno and Brian David Johnson.

Abstract: Computer security courses typically cover a breadth of technical topics, including threat modeling, applied cryptography, software security, and Web security. The technical artifacts of computer systems—and their associated computer security risks and defenses—do not exist in isolation, however; rather, these systems interact intimately with the needs, beliefs, and values of people. This is especially true as computers become more pervasive, embedding themselves not only into laptops, desktops, and the Web, but also into our cars, medical devices, and toys. Therefore, in addition to the standard technical material, we argue that students would benefit from developing a mindset focused on the broader societal and contextual issues surrounding computer security systems and risks. We used science fiction (SF) prototyping to facilitate such societal and contextual thinking in a recent undergraduate computer security course. We report on our approach and experiences here, as well as our recommendations for future computer security and other computer science courses.

Posted on August 1, 2011 at 6:03 AM

Robert Sawyer's Alibis

Back in 2002, science fiction author Robert J. Sawyer wrote an essay about the trade-off between privacy and security, and came out in favor of less privacy. I disagree with most of what he said, and have written pretty much the opposite essay—and others on the value of privacy and the future of privacy—several times since then.

The point of this blog entry isn’t really to debate the topic, though. It’s to reprint the opening paragraph of Sawyer’s essay, which I’ve never forgotten:

Whenever I visit a tourist attraction that has a guest register, I always sign it. After all, you never know when you’ll need an alibi.

Since I read that, whenever I see a tourist attraction with a guest register, I do the same thing. I sign “Robert J. Sawyer, Toronto, ON”—because you never know when he’ll need an alibi.

EDITED TO ADD (9/15): Sawyer’s essay now has a preface, which states that he wrote it to promote a book of his:

The following was written as promotion for my science-fiction novel Hominids, and does not necessarily reflect the author’s personal views.

In the comments below, though, Sawyer says flatly that the essay does not reflect his personal views, so I’m not sure why the preface on the essay page waffles with “not necessarily.”

I am completely surprised that Sawyer’s essay was fictional. For years I thought that he meant what he wrote, that it was a non-fiction essay written for a non-fiction publication. He has other essays on his website; I have no idea if any of those reflect his personal views. The whole thing makes absolutely no sense to me.

Posted on September 14, 2009 at 7:24 AM

Imagining Threats

A couple of years ago, the Department of Homeland Security hired a bunch of science fiction writers to come in for a day and think of ways terrorists could attack America. If our inability to prevent 9/11 marked a failure of imagination, as some said at the time, then who better than science fiction writers to inject a little imagination into counterterrorism planning?

I discounted the exercise at the time, calling it “embarrassing.” I never thought that 9/11 was a failure of imagination. I thought, and still think, that 9/11 was primarily a confluence of three things: the dual failure of centralized coordination and local control within the FBI, and some lucky breaks on the part of the attackers. More imagination leads to more movie-plot threats—which contributes to overall fear and overestimation of the risks. And that doesn’t help keep us safe at all.

Recently, I read a paper by Magne Jørgensen that provides some insight into why this is so. Titled “More Risk Analysis Can Lead to Increased Over-Optimism and Over-Confidence,” the paper isn’t about terrorism at all. It’s about software projects.

Most software development project plans are overly optimistic, and most planners are overconfident about their overoptimistic plans. Jørgensen studied how risk analysis affected this. He conducted four separate experiments on software engineers, and concluded (though there are lots of caveats in the paper, and more research needs to be done) that performing more risk analysis can make engineers more overoptimistic instead of more realistic.

Potential explanations all come from behavioral economics: cognitive biases that affect how we think and make decisions. (I’ve written about some of these biases and how they affect security decisions, and there’s a great book on the topic as well.)

First, there’s a control bias. We tend to underestimate risks in situations where we are in control, and overestimate risks in situations when we are not in control. Driving versus flying is a common example. This bias becomes stronger with familiarity, involvement and a desire to experience control, all of which increase with increased risk analysis. So the more risk analysis, the greater the control bias, and the greater the underestimation of risk.

The second explanation is the availability heuristic. Basically, we judge the importance or likelihood of something happening by the ease of bringing instances of that thing to mind. So we tend to overestimate the probability of a rare risk that is seen in a news headline, because it is so easy to imagine. Likewise, we underestimate the probability of things occurring that don’t happen to be in the news.

A corollary of this phenomenon is that, if we’re asked to think about a series of things, we overestimate the probability of the last thing thought about because it’s more easily remembered.

According to Jørgensen’s reasoning, people tend to do software risk analysis by thinking of the severe risks first, and then the more manageable risks. So the more risk analysis that’s done, the less severe the last risk imagined, and thus the greater the underestimation of the total risk.

The third explanation is similar: the peak-end rule. When thinking about a total experience, people tend to place too much weight on the last part of the experience. In one experiment, people had to hold their hands under cold water for one minute. Then, they had to hold their hands under cold water for one minute again, then keep their hands in the water for an additional 30 seconds while the temperature was gradually raised. When asked about it afterwards, most people preferred the second option to the first, even though the second had more total discomfort. (An intrusive medical device was redesigned along these lines, resulting in a longer period of discomfort but a relatively comfortable final few seconds. People liked it a lot better.) This means, like the second explanation, that the least severe last risk imagined gets greater weight than it deserves.

Fascinating stuff. But the biases produce the reverse effect when it comes to movie-plot threats. The more you think about far-fetched terrorism possibilities, the more outlandish and scary they become, and the less control you think you have. This causes us to overestimate the risks.

Think about this in the context of terrorism. If you’re asked to come up with threats, you’ll think of the significant ones first. If you’re pushed to find more, if you hire science-fiction writers to dream them up, you’ll quickly get into the low-probability movie-plot threats. But since they’re the last ones generated, they’re more available. (They’re also more vivid—science fiction writers are good at that—which further leads us to overestimate their probability.) They also suggest we’re even less in control of the situation than we believed. Spending too much time imagining disaster scenarios leads people to overestimate the risks of disaster.

I’m sure there’s also an anchoring effect in operation. This is another cognitive bias, where people’s numerical estimates of things are affected by numbers they’ve most recently thought about, even random ones. People who are given a list of three risks will think the total number of risks is lower than people who are given a list of 12 risks. So if the science fiction writers come up with 137 risks, people will believe that the number of risks is higher than they otherwise would—even if they recognize the 137 number is absurd.

Jørgensen does not believe risk analysis is useless in software projects, and I don’t believe scenario brainstorming is useless in counterterrorism. Both can lead to new insights and, as a result, a more intelligent analysis of both specific risks and general risk. But an over-reliance on either can be detrimental.

Last month, at the 2009 Homeland Security Science & Technology Stakeholders Conference in Washington D.C., science fiction writers helped the attendees think differently about security. This seems like a far better use of their talents than imagining some of the zillions of ways terrorists can attack America.

This essay originally appeared on Wired.com.

Posted on June 19, 2009 at 6:49 AM
