Entries Tagged "risk assessment"


Book Review: Cyber War

Cyber War: The Next Threat to National Security and What to Do About It by Richard Clarke and Robert Knake, HarperCollins, 2010.

Cyber War is a fast and enjoyable read. This means you could give the book to your non-techy friends, and they’d understand most of it, enjoy all of it, and learn a lot from it. Unfortunately, while there’s a lot of smart discussion and good information in the book, there’s also a lot of fear-mongering and hyperbole. Since there’s no easy way to tell someone which parts of the book to pay attention to and which parts to take with a grain of salt, I can’t recommend it for that purpose. This is a pity, because parts of the book really need to be widely read and discussed.

The fear-mongering and hyperbole are mostly in the beginning. There, the authors describe the cyberwar of novels. Hackers disable air traffic control, delete money from bank accounts, cause widespread blackouts, release chlorine gas from chemical plants, and—this is my favorite—remotely cause your printer to catch on fire. It’s exciting and scary stuff, but not terribly realistic. Even their discussions of previous “cyber wars”—Estonia, Georgia, the attacks against the U.S. and South Korea on July 4, 2009—are full of hyperbole. A lot of what they write is unproven speculation, but they don’t say that.

Better is the historical discussion of the formation of the U.S. Cyber Command, but there are important omissions. There’s nothing about the cyberwar fear-stoking that accompanied it: by the NSA’s General Keith Alexander, who became the first head of the command; by the NSA’s former director Mike McConnell, now a military contractor and Senior Vice President at Booz Allen Hamilton; and by others. By hyping the threat, the former has amassed a lot of power, and the latter a lot of money. Cyberwar is the new cash cow of the military-industrial complex, and any political discussion of cyberwar should include this as well.

Also interesting is the discussion of the asymmetric nature of the threat. A country like the United States, which is heavily dependent on the Internet and information technology, is much more vulnerable to cyber-attacks than a less-developed country like North Korea. This means that a country like North Korea would benefit from a cyberwar exchange: they’d inflict far more damage than they’d incur. This also means that, in this hypothetical cyberwar, there would be pressure on the U.S. to move the war to another theater: air and ground, for example. Definitely worth thinking about.

Most important is the section on treaties. Clarke and Knake have a lot of experience with nuclear treaties, and have done considerable thinking about how to apply that experience to cyberspace. The parallel isn’t perfect, but there’s a lot to learn about what worked and what didn’t, and—more importantly—how things worked and didn’t. The authors discuss treaties banning cyberwar entirely (unlikely), banning attacks against civilians, limiting what is allowed in peacetime, stipulating no first use of cyber weapons, and so on. They discuss cyberwar inspections, and how these treaties might be enforced. Since cyberwar would likely result in a new worldwide arms race, one with a more precarious trigger than the nuclear arms race, this part should be read and discussed far and wide. Sadly, it gets lost in the rest of the book. And, since the book lacks an index, it can be hard to find any particular section after you’re done reading it.

In the last chapter, the authors lay out their agenda for the future, which I largely agree with.

  1. We need to start talking publicly about cyberwar. This is certainly true. The threat of cyberwar is going to consume the sort of resources we shoveled into the nuclear threat half a century ago, and a realistic discussion of the threats, risks, countermeasures, and policy choices is essential. We need more universities offering degrees in cybersecurity, because we need more expertise across the entire gamut of threats.
  2. We need to better defend our military networks, the high-level ISPs, and our national power grid. Clarke and Knake call this the “Defensive Triad.” The authors and I disagree strongly on how this should be done, but there is no doubt that it should be done. The two parts of that triad currently in commercial hands are simply too central to our nation, and too vulnerable, to be left insecure. And their value is far greater to the nation than it is to the corporations that own them, which means the market will not naturally secure them. I agree with the authors that regulation is necessary.
  3. We need to reduce cybercrime. Even without the cyberwar bit, we need to do that. Cybercrime is bad, and it’s continuing to get worse. Yes, it’s hard. But it’s important.
  4. We need international cyberwar treaties. I couldn’t agree more about this. We do. We need to start thinking about them, talking about them, and negotiating them now, before the cyberwar arms race takes off. There are all kinds of issues with cyberwar treaties, and the book talks about a lot of them. However full of loopholes they might be, their existence will do more good than harm.
  5. We need more research on secure network designs. Again, even without the cyberwar bit, this is essential. We need more research in cybersecurity, a lot more.
  6. We need decisions about cyberwar—what weapons to build, what offensive actions to take, who to target—to be made as far up the command structure as possible. Clarke and Knake want the president to personally approve all of this, and I agree. Because of its nature, it can be easy to launch a small-scale cyber attack, and it can be easy for a small-scale attack to get out of hand and turn into a large-scale attack. We need the president to make the decisions, not some low-level military officer ensconced in a computer-filled bunker late one night.

This is great stuff, and a fine starting place for a national policy discussion on cybersecurity, whether it be against a military, espionage, or criminal threat. Unfortunately, for readers to get there, they have to wade through the rest of the book. And unless their bullshit detectors are already well-calibrated on this topic, I don’t want them reading all the hyperbole and fear-mongering that comes before, no matter how readable the book.

Note: I read Cyber War in April, when it first came out. I wanted to write a review then, but found that while my Kindle is great for reading, it’s terrible for flipping back and forth looking for bits and pieces to write about in a review. So I let the review languish. Finally, I borrowed a paper copy from my local library.

There are some other reviews of the book Cyber War; see also the reviews on the Amazon page.

I wrote two essays on cyberwar.

Posted on December 21, 2010 at 7:23 AM

Brian Snow Sows Cyber Fears

That’s no less sensational than the Calgary Herald headline: “Total cyber-meltdown almost inevitable, expert tells Calgary audience.” That’s former NSA Technical Director Brian Snow talking to a university audience.

“It’s long weeks to short months at best before there’s a security meltdown,” said Snow, as a guest lecturer for the Institute for Security, Privacy and Information Assurance, an interdisciplinary group at the university dedicated to information security.

“Will a bank failure be the wake-up call before we act? It’s a global problem—not just the U.S., not just Canada, but the world.”

I know Brian, and I have to believe his definition of “security meltdown” is more limited than the headline leads one to believe.

Posted on December 2, 2010 at 7:06 AM

The Politics of Allocating Homeland Security Money to States

From the Journal of Homeland Security and Emergency Management: “Politics or Risks? An Analysis of Homeland Security Grant Allocations to the States.”

Abstract: In the days following the September 11 terrorist attacks on the United States, the nation’s elected officials created the USA Patriot Act. The act included a grant program for the 50 states that was intended to assist them with homeland security and preparedness efforts. However, not long after its passage, critics charged the Department of Homeland Security with allocating the grant funds on the basis of “politics” rather than “risk.” This study analyzes the allocation of funds through all seven of the grant subprograms for the years 2003 through 2006. Conducting a linear regression analysis for each year, we find that the total per capita amounts are inversely related to risk factors but are not related at all to partisan political factors between 2003 and 2005. In 2006, Congress changed the formula with the intention of increasing the relationship between allocations and risk. However, our findings reveal that this change did not produce the intended effect and the allocations were still negatively related to risk and unrelated to partisan politics.

I’m not sure I buy the methodology, but there it is.
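To make the abstract’s methodology concrete, here is a minimal sketch of the kind of per-year regression it describes, regressing per-capita allocations on risk and partisan variables. The file name, column names, and model specification are hypothetical illustrations, not the authors’ actual data or code.

```python
# Hypothetical sketch of a per-year OLS regression like the one the abstract
# describes. "grants.csv", the column names, and the variable choices are
# assumptions for illustration; the paper's actual data and model may differ.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("grants.csv")  # one row per state per year (hypothetical)

for year, group in df.groupby("year"):
    # Regress per-capita grant dollars on a risk index and a partisan measure.
    X = sm.add_constant(group[["risk_index", "partisan_alignment"]])
    fit = sm.OLS(group["per_capita_grant"], X).fit()
    # A negative, significant coefficient on risk_index would mirror the
    # paper's finding that allocations were inversely related to risk.
    print(year, fit.params["risk_index"], fit.pvalues["risk_index"])
```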

Posted on October 7, 2010 at 7:03 AM

Cultural Cognition of Risk

This is no surprise:

The people behind the new study start by asking a pretty obvious question: “Why do members of the public disagree—­sharply and persistently—­about facts on which expert scientists largely agree?” (Elsewhere, they refer to the “intense political contestation over empirical issues on which technical experts largely agree.”) In this regard, the numbers from the Pew survey are pretty informative. Ninety-seven percent of the members of the American Association for the Advancement of Science accept the evidence for evolution, but at least 40 percent of the public thinks that major differences remain in scientific opinion on this topic. Clearly, the scientific community isn’t succeeding in making the public aware of its opinion.

According to the new study, this isn’t necessarily the fault of the scientists, though. The authors favor a model, called the cultural cognition of risk, which “refers to the tendency of individuals to form risk perceptions that are congenial to their values.” This wouldn’t apply directly to evolution, but would to climate change: if your cultural values make you less likely to accept the policy implications of our current scientific understanding, then you’ll be less likely to accept the science.

But, as the authors note, opponents of a scientific consensus often try to claim to be opposing it on scientific, rather than cultural grounds. “Public debates rarely feature open resistance to science,” they note, “the parties to such disputes are much more likely to advance diametrically opposed claims about what the scientific evidence really shows.” To get there, those doing the arguing must ultimately be selective about what evidence and experts they accept—­they listen to, and remember, those who tell them what they want to hear. “The cultural cognition thesis predicts that individuals will more readily recall instances of experts taking the position that is consistent with their cultural predisposition than ones taking positions inconsistent with it,” the paper suggests.

[…]

So, it’s not just a matter of the public not understanding the expert opinions of places like the National Academies of Science; they simply discount the expertise associated with any opinion they’d rather not hear.

Here’s the paper.

Posted on September 28, 2010 at 6:33 AM

Questioning Terrorism Policy

Worth reading:

…what if we chose to accept the fact that every few years, despite all reasonable precautions, some hundreds or thousands of us may die in the sort of ghastly terrorist attack that a democratic republic cannot 100-percent protect itself from without subverting the very principles that make it worth protecting?

Is this thought experiment monstrous? Would it be monstrous to refer to the 40,000-plus domestic highway deaths we accept each year because the mobility and autonomy of the car are evidently worth that high price? Is monstrousness why no serious public figure now will speak of the delusory trade-off of liberty for safety that Ben Franklin warned about more than 200 years ago? What exactly has changed between Franklin’s time and ours? Why now can we not have a serious national conversation about sacrifice, the inevitability of sacrifice—either of (a) some portion of safety or (b) some portion of the rights and protections that make the American idea so incalculably precious?

Posted on September 18, 2010 at 6:05 AM

Parental Fears vs. Realities

From NPR:

Based on surveys Barnes collected, the top five worries of parents are, in order:

  1. Kidnapping
  2. School snipers
  3. Terrorists
  4. Dangerous strangers
  5. Drugs

But how do children really get hurt or killed?

  1. Car accidents
  2. Homicide (usually committed by a person who knows the child, not a stranger)
  3. Abuse
  4. Suicide
  5. Drowning

Why such a big discrepancy between worries and reality? Barnes says parents fixate on rare events because they internalize horrific stories they hear on the news or from a friend without stopping to think about the odds the same thing could happen to their children.

No surprise to any regular reader of this blog.

Posted on September 8, 2010 at 6:06 AM

Book Review: How Risky Is It, Really?

David Ropeik is a writer and consultant who specializes in risk perception and communication. His book, How Risky Is It, Really?: Why Our Fears Don’t Always Match the Facts, is a solid introduction to the biology, psychology, and sociology of risk. If you’re well-read on the topic already, you won’t find much you didn’t already know. But if this is a new topic for you, or if you want a well-organized guide to the current research on risk perception all in one place, this is pretty close to the perfect book.

Ropeik builds his model of human risk perception from the inside out. Chapter 1 is about fear, our largely subconscious reaction to risk. Chapter 2 discusses bounded rationality, the cognitive shortcuts that allow us to efficiently make risk trade-offs. Chapter 3 discusses some of the common cognitive biases we have that cause us to either overestimate or underestimate risk: trust, control, choice, natural vs. man-made, fairness, etc.—thirteen in all. Finally, Chapter 4 discusses the sociological aspects of risk perception: how our estimation of risk depends on that of the people around us.

The book is primarily about how we humans get risk wrong: how our perception of risk differs from the reality of risk. But Ropeik is careful not to use the word “wrong,” and repeatedly warns us against doing so. Risk perception is not right or wrong, he says; it simply is. I don’t agree with this. There are both a feeling and a reality of risk and security, and when they differ, we make bad security trade-offs. If you think your risk of dying in a terrorist attack, or of your children being kidnapped, is higher than it really is, you’re going to make bad security trade-offs. Yes, security theater has its place, but we should try to make that place as small as we can.

In Chapter 5, Ropeik tries his hand at solutions to this problem: “closing the perception gap” is how he puts it; reducing the difference between the feeling of security and the reality is how I like to explain it. This is his weakest chapter, but it’s also a very hard problem. My writings along this line are similarly weak. Still, his ideas are worth reading and thinking about.

I don’t have any other complaints about the book. Ropeik nicely balances readability with scientific rigor, his examples are interesting and illustrative, and he is comprehensive without being boring. Extensive footnotes allow the reader to explore the actual research behind the generalities. Even though I didn’t learn much from reading it, I enjoyed the ride.

How Risky Is It, Really? is available in hardcover and for the Kindle. Presumably a paperback will come out in a year or so. Ropeik has a blog, although he doesn’t update it much.

Posted on August 2, 2010 at 6:38 AM

The Threat of Cyberwar Has Been Grossly Exaggerated

There’s a power struggle going on in the U.S. government right now.

It’s about who is in charge of cyber security, and how much control the government will exert over civilian networks. And by beating the drums of war, the military is coming out on top.

“The United States is fighting a cyberwar today, and we are losing,” said former NSA director—and current cyberwar contractor—Mike McConnell. “Cyber 9/11 has happened over the last ten years, but it happened slowly so we don’t see it,” said former National Cyber Security Division director Amit Yoran. Richard Clarke, whom Yoran replaced, wrote an entire book hyping the threat of cyberwar.

General Keith Alexander, the current commander of the U.S. Cyber Command, hypes it every chance he gets. This isn’t just rhetoric of a few over-eager government officials and headline writers; the entire national debate on cyberwar is plagued with exaggerations and hyperbole.

Googling those names and terms—as well as “cyber Pearl Harbor,” “cyber Katrina,” and even “cyber Armageddon”—gives some idea how pervasive these memes are. Prefix “cyber” to something scary, and you end up with something really scary.

Cyberspace has all sorts of threats, day in and day out. Cybercrime is by far the largest: fraud, through identity theft and other means, extortion, and so on. Cyber-espionage is another, both government- and corporate-sponsored. Traditional hacking, without a profit motive, is still a threat. So is cyber-activism: people, most often kids, playing politics by attacking government and corporate websites and networks.

These threats cover a wide variety of perpetrators, motivations, tactics, and goals. You can see this variety in what the media has mislabeled as “cyberwar.” The attacks against Estonian websites in 2007 were simple hacking attacks by ethnic Russians angry at anti-Russian policies; these were denial-of-service attacks, a normal risk in cyberspace and hardly unprecedented.

A real-world comparison might be if an army invaded a country, then all got in line in front of people at the DMV so they couldn’t renew their licenses. If that’s what war looks like in the 21st century, we have little to fear.

Similar attacks against Georgia, which accompanied an actual Russian invasion, were also probably the responsibility of citizen activists or organized crime. A series of power blackouts in Brazil was caused by criminal extortionists—or was it sooty insulators? China is engaging in espionage, not war, in cyberspace. And so on.

One problem is that there’s no clear definition of “cyberwar.” What does it look like? How does it start? When is it over? Even cybersecurity experts don’t know the answers to these questions, and it’s dangerous to broadly apply the term “war” unless we know a war is going on.

Yet recent news articles have claimed that China declared cyberwar on Google, that Germany attacked China, and that a group of young hackers declared cyberwar on Australia. (Yes, cyberwar is so easy that even kids can do it.) Clearly we’re not talking about real war here, but a rhetorical war: like the war on terror.

We have a variety of institutions that can defend us when attacked: the police, the military, the Department of Homeland Security, various commercial products and services, and our own personal or corporate lawyers. The legal framework for any particular attack depends on two things: the attacker and the motive. Those are precisely the two things you don’t know when you’re being attacked on the Internet. We saw this on July 4 last year, when U.S. and South Korean websites were attacked by unknown perpetrators from North Korea—or perhaps England. Or was it Florida?

We surely need to improve our cybersecurity. But words have meaning, and metaphors matter. There’s a power struggle going on for control of our nation’s cybersecurity strategy, and the NSA and DoD are winning. If we frame the debate in terms of war, if we accept the military’s expansive cyberspace definition of “war,” we feed our fears.

We reinforce the notion that we’re helpless—what person or organization can defend itself in a war?—and others need to protect us. We invite the military to take over security, and to ignore the limits on power that often get jettisoned during wartime.

If, on the other hand, we use the more measured language of cybercrime, we change the debate. Crime fighting requires both resolve and resources, but it’s done within the context of normal life. We willingly give our police extraordinary powers of investigation and arrest, but we temper these powers with a judicial system and legal protections for citizens.

We need to be prepared for war, and a Cyber Command is just as vital as an Army or a Strategic Air Command. And because kid hackers and cyber-warriors use the same tactics, the defenses we build against crime and espionage will also protect us from more concerted attacks. But we’re not fighting a cyberwar now, and the risks of a cyberwar are no greater than the risks of a ground invasion. We need peacetime cyber-security, administered within the myriad structure of public and private security institutions we already have.

This essay previously appeared on CNN.com.

EDITED TO ADD (7/7): Earlier this month, I participated in a debate: “The Cyberwar Threat Has Been Grossly Exaggerated.” (Transcript here, video here.) Marc Rotenberg of EPIC and I were for the motion; Mike McConnell and Jonathan Zittrain were against. We lost.

We lost fair and square, for a bunch of reasons—we didn’t present our case very well, Jonathan Zittrain is a way better debater than we were—but basically the vote came down to the definition of “cyberwar.” If you believed in an expansive definition of cyberwar, one that encompassed a lot more types of attacks than traditional war, then you voted against the motion. If you believed in a limited definition of cyberwar, one that is a subset of traditional war, then you voted for it.

This continues to be an important debate.

EDITED TO ADD (7/7): Last month the Senate Homeland Security Committee held hearings on “Protecting Cyberspace as a National Asset: Comprehensive Legislation for the 21st Century.” Unfortunately, the DHS is getting hammered at these hearings, and the NSA is consolidating its power.

EDITED TO ADD (7/7): North Korea was probably not responsible for last year’s cyberattacks. Good thing we didn’t retaliate.

Posted on July 7, 2010 at 12:58 PM

The Continuing Incompetence of Terrorists

The Atlantic on stupid terrorists:

Nowhere is the gap between sinister stereotype and ridiculous reality more apparent than in Afghanistan, where it’s fair to say that the Taliban employ the world’s worst suicide bombers: one in two manages to kill only himself. And this success rate hasn’t improved at all in the five years they’ve been using suicide bombers, despite the experience of hundreds of attacks—or attempted attacks. In Afghanistan, as in many cultures, a manly embrace is a time-honored tradition for warriors before they go off to face death. Thus, many suicide bombers never even make it out of their training camp or safe house, as the pressure from these group hugs triggers the explosives in suicide vests. According to several sources at the United Nations, as many as six would-be suicide bombers died last July after one such embrace in Paktika.

Many Taliban operatives are just as clumsy when suicide is not part of the plan. In November 2009, several Talibs transporting an improvised explosive device were killed when it went off unexpectedly. The blast also took out the insurgents’ shadow governor in the province of Balkh.

When terrorists do execute an attack, or come close, they often have security failures to thank, rather than their own expertise. Consider Umar Farouk Abdulmutallab—the Nigerian “Jockstrap Jihadist” who boarded a Detroit-bound jet in Amsterdam with a suicidal plan in his head and some explosives in his underwear. Although the media colored the incident as a sophisticated al-Qaeda plot, Abdulmutallab showed no great skill or cunning, and simple safeguards should have kept him off the plane in the first place. He was, after all, traveling without luggage, on a one-way ticket that he purchased with cash. All of this while being on a U.S. government watch list.

Fortunately, Abdulmutallab, a college-educated engineer, failed to detonate his underpants. A few months later another college grad, Faisal Shahzad, is alleged to have crudely rigged an SUV to blow up in Times Square. That plan fizzled and he was quickly captured, despite the fact that he was reportedly trained in a terrorist boot camp in Pakistan. Indeed, though many of the terrorists who strike in the West are well educated, their plots fail because they lack operational know-how. On June 30, 2007, two men—one a medical doctor, the other studying for his Ph.D.—attempted a brazen attack on Glasgow Airport. Their education did them little good. Planning to crash their propane-and-petrol-laden Jeep Cherokee into an airport terminal, the men instead steered the SUV, with flames spurting out its windows, into a security barrier. The fiery crash destroyed only the Jeep, and both men were easily apprehended; the driver later died from his injuries. (The day before, the same men had rigged two cars to blow up near a London nightclub. That plan was thwarted when one car was spotted by paramedics and the other, parked illegally, was removed by a tow truck. As a bonus for investigators, the would-be bombers’ cell phones, loaded with the phone numbers of possible accomplices, were salvaged from the cars.)

Reminds me of my own “Portrait of the Modern Terrorist as an Idiot.”

Posted on June 18, 2010 at 5:49 AM

