Entries Tagged "risks"

Page 11 of 16

Brian Snow Sows Cyber Fears

That’s no less sensational than the Calgary Herald headline: “Total cyber-meltdown almost inevitable, expert tells Calgary audience.” That’s former NSA Technical Director Brian Snow talking to a university audience.

“It’s long weeks to short months at best before there’s a security meltdown,” said Snow, as a guest lecturer for the Institute for Security, Privacy and Information Assurance, an interdisciplinary group at the university dedicated to information security.

“Will a bank failure be the wake-up call before we act? It’s a global problem—not just the U.S., not just Canada, but the world.”

I know Brian, and I have to believe his definition of “security meltdown” is more limited than the headline leads one to believe.

Posted on December 2, 2010 at 7:06 AM

Securing the Washington Monument

Good article on security options for the Washington Monument:

Unfortunately, the bureaucratic gears are already grinding, and what will be presented to the public Monday doesn’t include important options, including what became known as the “tunnel” in previous discussions of the issue. Nor does it include the choice of more minimal visitor screening—simple wanding or visual bag inspection—that might not require costly and intrusive changes to the structure. The choice to accept risk isn’t on the table, either. Finally, and although it might seem paradoxical given how important resisting security authoritarianism is to preserving the symbolism of freedom, it doesn’t take seriously the idea that perhaps the monument’s interior should be closed altogether—a small concession that might have collateral benefits.

[…]

Closing the interior of the monument, the construction of which was suspended during the Civil War, would remind the public of the effect that fears engendered by the current war on terrorism have had on public space. Closing it as a symbolic act might initiate an overdue discussion about the loss of even more important public spaces, including the front entrance of the Supreme Court and the west terrace of the Capitol. It would be a dramatic reminder of the choices we as a nation have made, and perhaps an inspiration to change our ways in favor of a more open, risk-tolerant society that understands public space always has some element of danger.

EDITED TO ADD (11/15): More information on the decision process.

Posted on November 10, 2010 at 7:09 AM

The Politics of Allocating Homeland Security Money to States

From the Journal of Homeland Security and Emergency Management: “Politics or Risks? An Analysis of Homeland Security Grant Allocations to the States.”

Abstract: In the days following the September 11 terrorist attacks on the United States, the nation’s elected officials created the USA Patriot Act. The act included a grant program for the 50 states that was intended to assist them with homeland security and preparedness efforts. However, not long after its passage, critics charged the Department of Homeland Security with allocating the grant funds on the basis of “politics” rather than “risk.” This study analyzes the allocation of funds through all seven of the grant subprograms for the years 2003 through 2006. Conducting a linear regression analysis for each year, our research indicates that the total per capita amounts are inversely related to risk factors but are not related at all to partisan political factors between 2003 and 2005. In 2006, Congress changed the formula with the intention of increasing the relationship between allocations and risk. However, our findings reveal that this change did not produce the intended effect and the allocations were still negatively related to risk and unrelated to partisan politics.

I’m not sure I buy the methodology, but there it is.
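
The analysis the abstract describes is straightforward to picture: for each grant year, regress states’ per capita allocations on measures of risk and of partisan politics, then look at the sign and significance of each coefficient. Here is a minimal sketch in Python, with entirely hypothetical variable names and synthetic data, since the post doesn’t specify the study’s actual measures:

    # A minimal sketch of the per-year analysis the abstract describes, using
    # ordinary least squares from statsmodels. Everything here is hypothetical:
    # the study's actual risk and political measures aren't given in the post,
    # so the variable names and data below are stand-ins.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(0)

    # Synthetic state-level data for one grant year.
    states = pd.DataFrame({
        "per_capita_grant": rng.uniform(5, 40, 50),    # dollars per resident
        "risk_index": rng.uniform(0, 1, 50),           # composite risk measure
        "partisan_alignment": rng.integers(0, 2, 50),  # 1 = aligned with administration
    })

    # Regress allocations on risk and politics; repeat once per year (2003-2006).
    X = sm.add_constant(states[["risk_index", "partisan_alignment"]])
    model = sm.OLS(states["per_capita_grant"], X).fit()
    print(model.summary())

    # The study's finding corresponds to a negative, significant coefficient on
    # risk_index and an insignificant coefficient on partisan_alignment.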

Posted on October 7, 2010 at 7:03 AM

Cultural Cognition of Risk

This is no surprise:

The people behind the new study start by asking a pretty obvious question: “Why do members of the public disagree—sharply and persistently—about facts on which expert scientists largely agree?” (Elsewhere, they refer to the “intense political contestation over empirical issues on which technical experts largely agree.”) In this regard, the numbers from the Pew survey are pretty informative. Ninety-seven percent of the members of the American Association for the Advancement of Science accept the evidence for evolution, but at least 40 percent of the public thinks that major differences remain in scientific opinion on this topic. Clearly, the scientific community isn’t succeeding in making the public aware of its opinion.

According to the new study, this isn’t necessarily the fault of the scientists, though. The authors favor a model, called the cultural cognition of risk, which “refers to the tendency of individuals to form risk perceptions that are congenial to their values.” This wouldn’t apply directly to evolution, but would to climate change: if your cultural values make you less likely to accept the policy implications of our current scientific understanding, then you’ll be less likely to accept the science.

But, as the authors note, opponents of a scientific consensus often try to claim to be opposing it on scientific, rather than cultural, grounds. “Public debates rarely feature open resistance to science,” they note; “the parties to such disputes are much more likely to advance diametrically opposed claims about what the scientific evidence really shows.” To get there, those doing the arguing must ultimately be selective about what evidence and experts they accept—they listen to, and remember, those who tell them what they want to hear. “The cultural cognition thesis predicts that individuals will more readily recall instances of experts taking the position that is consistent with their cultural predisposition than ones taking positions inconsistent with it,” the paper suggests.

[…]

So, it’s not just a matter of the public not understanding the expert opinions of places like the National Academies of Science; they simply discount the expertise associated with any opinion they’d rather not hear.

Here’s the paper.

Posted on September 28, 2010 at 6:33 AM

Questioning Terrorism Policy

Worth reading:

…what if we chose to accept the fact that every few years, despite all reasonable precautions, some hundreds or thousands of us may die in the sort of ghastly terrorist attack that a democratic republic cannot 100-percent protect itself from without subverting the very principles that make it worth protecting?

Is this thought experiment monstrous? Would it be monstrous to refer to the 40,000-plus domestic highway deaths we accept each year because the mobility and autonomy of the car are evidently worth that high price? Is monstrousness why no serious public figure now will speak of the delusory trade-off of liberty for safety that Ben Franklin warned about more than 200 years ago? What exactly has changed between Franklin’s time and ours? Why now can we not have a serious national conversation about sacrifice, the inevitability of sacrifice—either of (a) some portion of safety or (b) some portion of the rights and protections that make the American idea so incalculably precious?

Posted on September 18, 2010 at 6:05 AM

Book Review: How Risky Is It, Really?

David Ropeik is a writer and consultant who specializes in risk perception and communication. His book, How Risky Is It, Really?: Why Our Fears Don’t Always Match the Facts, is a solid introduction to the biology, psychology, and sociology of risk. If you’re well-read on the topic already, you won’t find much you didn’t already know. But if this is a new topic for you, or if you want a well-organized guide to the current research on risk perception all in one place, this is pretty close to the perfect book.

Ropeik builds his model of human risk perception from the inside out. Chapter 1 is about fear, our largely subconscious reaction to risk. Chapter 2 discusses bounded rationality, the cognitive shortcuts that allow us to efficiently make risk trade-offs. Chapter 3 discusses some of the common cognitive biases we have that cause us to either overestimate or underestimate risk: trust, control, choice, natural vs. man-made, fairness, etc.—thirteen in all. Finally, Chapter 4 discusses the sociological aspects of risk perception: how our estimation of risk depends on that of the people around us.

The book is primarily about how we humans get risk wrong: how our perception of risk differs from the reality of risk. But Ropeik is careful not to use the word “wrong,” and repeatedly warns us against doing so. Risk perception is not right or wrong, he says; it simply is. I don’t agree with this. There is both a feeling and a reality of risk and security, and when they differ, we make bad security trade-offs. If you think your risk of dying in a terrorist attack, or of your children being kidnapped, is higher than it really is, you’re going to make bad security trade-offs. Yes, security theater has its place, but we should try to make that place as small as we can.

In Chapter 5, Ropeik tries his hand at solutions to this problem: “closing the perception gap” is how he puts it; reducing the difference between the feeling of security and the reality is how I like to explain it. This is his weakest chapter, but it’s also a very hard problem. My writings along this line are similarly weak. Still, his ideas are worth reading and thinking about.

I don’t have any other complaints about the book. Ropeik nicely balances readability with scientific rigor, his examples are interesting and illustrative, and he is comprehensive without being boring. Extensive footnotes allow the reader to explore the actual research behind the generalities. Even though I didn’t learn much from reading it, I enjoyed the ride.

How Risky Is It, Really? is available in hardcover and for the Kindle. Presumably a paperback will come out in a year or so. Ropeik has a blog, although he doesn’t update it much.

Posted on August 2, 2010 at 6:38 AM

The Threat of Cyberwar Has Been Grossly Exaggerated

There’s a power struggle going on in the U.S. government right now.

It’s about who is in charge of cyber security, and how much control the government will exert over civilian networks. And by beating the drums of war, the military is coming out on top.

“The United States is fighting a cyberwar today, and we are losing,” said former NSA director—and current cyberwar contractor—Mike McConnell. “Cyber 9/11 has happened over the last ten years, but it happened slowly so we don’t see it,” said former National Cyber Security Division director Amit Yoran. Richard Clarke, whom Yoran replaced, wrote an entire book hyping the threat of cyberwar.

General Keith Alexander, the current commander of the U.S. Cyber Command, hypes it every chance he gets. This isn’t just rhetoric of a few over-eager government officials and headline writers; the entire national debate on cyberwar is plagued with exaggerations and hyperbole.

Googling those names and terms—as well as “cyber Pearl Harbor,” “cyber Katrina,” and even “cyber Armageddon”—gives some idea how pervasive these memes are. Prefix “cyber” to something scary, and you end up with something really scary.

Cyberspace has all sorts of threats, day in and day out. Cybercrime is by far the largest: fraud, through identity theft and other means, extortion, and so on. Cyber-espionage is another, both government- and corporate-sponsored. Traditional hacking, without a profit motive, is still a threat. So is cyber-activism: people, most often kids, playing politics by attacking government and corporate websites and networks.

These threats cover a wide variety of perpetrators, motivations, tactics, and goals. You can see this variety in what the media has mislabeled as “cyberwar.” The attacks against Estonian websites in 2007 were simple hacking attacks by ethnic Russians angry at anti-Russian policies; these were denial-of-service attacks, a normal risk in cyberspace and hardly unprecedented.

A real-world comparison might be if an army invaded a country, then all got in line in front of people at the DMV so they couldn’t renew their licenses. If that’s what war looks like in the 21st century, we have little to fear.

Similar attacks against Georgia, which accompanied an actual Russian invasion, were also probably the responsibility of citizen activists or organized crime. A series of power blackouts in Brazil was caused by criminal extortionists—or was it sooty insulators? China is engaging in espionage, not war, in cyberspace. And so on.

One problem is that there’s no clear definition of “cyberwar.” What does it look like? How does it start? When is it over? Even cybersecurity experts don’t know the answers to these questions, and it’s dangerous to broadly apply the term “war” unless we know a war is going on.

Yet recent news articles have claimed that China declared cyberwar on Google, that Germany attacked China, and that a group of young hackers declared cyberwar on Australia. (Yes, cyberwar is so easy that even kids can do it.) Clearly we’re not talking about real war here, but a rhetorical war: like the war on terror.

We have a variety of institutions that can defend us when attacked: the police, the military, the Department of Homeland Security, various commercial products and services, and our own personal or corporate lawyers. The legal framework for any particular attack depends on two things: the attacker and the motive. Those are precisely the two things you don’t know when you’re being attacked on the Internet. We saw this on July 4 last year, when U.S. and South Korean websites were attacked by unknown perpetrators from North Korea—or perhaps England. Or was it Florida?

We surely need to improve our cybersecurity. But words have meaning, and metaphors matter. There’s a power struggle going on for control of our nation’s cybersecurity strategy, and the NSA and DoD are winning. If we frame the debate in terms of war, if we accept the military’s expansive cyberspace definition of “war,” we feed our fears.

We reinforce the notion that we’re helpless—what person or organization can defend itself in a war?—and others need to protect us. We invite the military to take over security, and to ignore the limits on power that often get jettisoned during wartime.

If, on the other hand, we use the more measured language of cybercrime, we change the debate. Crime fighting requires both resolve and resources, but it’s done within the context of normal life. We willingly give our police extraordinary powers of investigation and arrest, but we temper these powers with a judicial system and legal protections for citizens.

We need to be prepared for war, and a Cyber Command is just as vital as an Army or a Strategic Air Command. And because kid hackers and cyber-warriors use the same tactics, the defenses we build against crime and espionage will also protect us from more concerted attacks. But we’re not fighting a cyberwar now, and the risks of a cyberwar are no greater than the risks of a ground invasion. We need peacetime cyber-security, administered within the myriad structure of public and private security institutions we already have.

This essay previously appeared on CNN.com.

EDITED TO ADD (7/7): Earlier this month, I participated in a debate: “The Cyberwar Threat has been Grossly Exaggerated.” (Transcript here, video here.) Marc Rotenberg of EPIC and I were for the motion; Mike McConnell and Jonathan Zittrain were against. We lost.

We lost fair and square, for a bunch of reasons—we didn’t present our case very well, Jonathan Zittrain is a way better debater than we were—but basically the vote came down to the definition of “cyberwar.” If you believed in an expansive definition of cyberwar, one that encompassed a lot more types of attacks than traditional war, then you voted against the motion. If you believed in a limited definition of cyberwar, one that is a subset of traditional war, then you voted for it.

This continues to be an important debate.

EDITED TO ADD (7/7): Last month the Senate Homeland Security Committee held hearings on “Protecting Cyberspace as a National Asset: Comprehensive Legislation for the 21st Century.” Unfortunately, the DHS is getting hammered at these hearings, and the NSA is consolidating its power.

EDITED TO ADD (7/7): North Korea was probably not responsible for last year’s cyberattacks. Good thing we didn’t retaliate.

Posted on July 7, 2010 at 12:58 PM

The Continuing Incompetence of Terrorists

The Atlantic on stupid terrorists:

Nowhere is the gap between sinister stereotype and ridiculous reality more apparent than in Afghanistan, where it’s fair to say that the Taliban employ the world’s worst suicide bombers: one in two manages to kill only himself. And this success rate hasn’t improved at all in the five years they’ve been using suicide bombers, despite the experience of hundreds of attacks—or attempted attacks. In Afghanistan, as in many cultures, a manly embrace is a time-honored tradition for warriors before they go off to face death. Thus, many suicide bombers never even make it out of their training camp or safe house, as the pressure from these group hugs triggers the explosives in suicide vests. According to several sources at the United Nations, as many as six would-be suicide bombers died last July after one such embrace in Paktika.

Many Taliban operatives are just as clumsy when suicide is not part of the plan. In November 2009, several Talibs transporting an improvised explosive device were killed when it went off unexpectedly. The blast also took out the insurgents’ shadow governor in the province of Balkh.

When terrorists do execute an attack, or come close, they often have security failures to thank, rather than their own expertise. Consider Umar Farouk Abdulmutallab—the Nigerian “Jockstrap Jihadist” who boarded a Detroit-bound jet in Amsterdam with a suicidal plan in his head and some explosives in his underwear. Although the media colored the incident as a sophisticated al-Qaeda plot, Abdulmutallab showed no great skill or cunning, and simple safeguards should have kept him off the plane in the first place. He was, after all, traveling without luggage, on a one-way ticket that he purchased with cash. All of this while being on a U.S. government watch list.

Fortunately, Abdulmutallab, a college-educated engineer, failed to detonate his underpants. A few months later another college grad, Faisal Shahzad, is alleged to have crudely rigged an SUV to blow up in Times Square. That plan fizzled and he was quickly captured, despite the fact that he was reportedly trained in a terrorist boot camp in Pakistan. Indeed, though many of the terrorists who strike in the West are well educated, their plots fail because they lack operational know-how. On June 30, 2007, two men—one a medical doctor, the other studying for his Ph.D.—attempted a brazen attack on Glasgow Airport. Their education did them little good. Planning to crash their propane-and-petrol-laden Jeep Cherokee into an airport terminal, the men instead steered the SUV, with flames spurting out its windows, into a security barrier. The fiery crash destroyed only the Jeep, and both men were easily apprehended; the driver later died from his injuries. (The day before, the same men had rigged two cars to blow up near a London nightclub. That plan was thwarted when one car was spotted by paramedics and the other, parked illegally, was removed by a tow truck. As a bonus for investigators, the would-be bombers’ cell phones, loaded with the phone numbers of possible accomplices, were salvaged from the cars.)

Reminds me of my own “Portrait of the Modern Terrorist as an Idiot.”

Posted on June 18, 2010 at 5:49 AM

Hot Dog Security

A nice dose of risk reality:

Last week, the American Academy of Pediatrics issued a statement calling for large-type warning labels on the foods that kids most commonly choke on—grapes, nuts, carrots, candy and public enemy No. 1: the frank. Then the lead author of the report, pediatric emergency room doctor Gary Smith, went one step further.

He called for a redesign of the hot dog.

The reason, he said, is that hot dogs are “high-risk.” But are they? I mean, I certainly diced my share of Oscar Mayers when my kids were younger, but if once in a while we stopped for a hot dog and I gave it to ’em whole, was I really taking a crazy risk?

Here are the facts: About 61 children each year choke to death on food, or one in a million. Of them, 17 percent—or about 10—choke on franks. So now we are talking 1 in 6 million. This is still tragic; the death of any child is. But to call it “high-risk” means we would have to call pretty much all of life “high-risk.” Especially getting in a car! About 1,300 kids younger than 14 die each year as car passengers, compared with 10 a year from hot dogs.

What’s happening is that the concept of “risk” is broadening to encompass almost everything a kid ever does, from running to sitting to sleeping. Literally!
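
The arithmetic in that excerpt is easy to verify. The one number you have to infer is the child population: “61 children each year … or one in a million” implies roughly 61 million kids. A quick back-of-the-envelope sketch in Python, with that inferred figure flagged as an assumption:

    # Back-of-the-envelope check of the quoted figures. The 61-million child
    # population is an inference from "61 deaths ... or one in a million";
    # everything else comes straight from the excerpt.
    children = 61_000_000
    food_choking_deaths = 61        # annual child deaths from choking on food
    hot_dog_share = 0.17            # 17 percent of those involve hot dogs

    hot_dog_deaths = food_choking_deaths * hot_dog_share   # about 10 per year
    print(f"hot dog risk: 1 in {children / hot_dog_deaths:,.0f}")  # ~1 in 6 million

    # Compare with the ~1,300 annual deaths of child car passengers:
    print(f"riding in a car is about {1300 / hot_dog_deaths:.0f}x deadlier")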

There’s a lot of good stuff on this website about how to raise children without being crazy paranoid. She comments on my worst-case thinking essay, too.

Posted on June 17, 2010 at 2:28 PM

