Entries Tagged "risk assessment"


Why People Don't Understand Risks

Yesterday’s Minneapolis Star Tribune had the front-page headline: “Co-sleeping kills about 20 infants each year.” (The headline in the web article is different.) The only problem is, in either case, there’s no additional information with which to make sense of the statistic.

How many infants don’t die each year? How many infants die each year in separate beds? Is the death rate for co-sleepers greater or less than the death rate for separate-bed sleepers? Without this information, it’s impossible to know whether this statistic is good or bad.
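
The missing piece is a denominator. To see why the raw count of 20 is meaningless on its own, here's a back-of-the-envelope comparison; every number in it is made up purely for illustration, not taken from the article or from Minnesota data:

```python
# Hypothetical figures for illustration only -- not actual Minnesota data.
co_sleep_deaths = 20          # reported deaths per year among co-sleeping infants
co_sleep_infants = 50_000     # assumed number of infants who co-sleep

crib_deaths = 30              # assumed deaths per year among infants in separate beds
crib_infants = 300_000        # assumed number of infants in separate beds

co_sleep_rate = co_sleep_deaths / co_sleep_infants * 100_000
crib_rate = crib_deaths / crib_infants * 100_000

print(f"Co-sleeping:  {co_sleep_rate:.0f} deaths per 100,000 infants")
print(f"Separate bed: {crib_rate:.0f} deaths per 100,000 infants")
# It's the comparison of the two rates -- not the raw count of 20 -- that tells
# you whether co-sleeping is the riskier arrangement.
```

With these invented denominators, co-sleeping looks four times as risky; with different denominators, it could look safer. The headline gives you no way to tell.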

But the media rarely provides context for the data. The story was prompted by an incident in which a baby was accidentally smothered in his sleep.

Oh, and that 20-infants-per-year number is for Minnesota only. No word as to whether the situation is better or worse in other states.

Posted on July 7, 2009 at 1:50 PM

Imagining Threats

A couple of years ago, the Department of Homeland Security hired a bunch of science fiction writers to come in for a day and think of ways terrorists could attack America. If our inability to prevent 9/11 marked a failure of imagination, as some said at the time, then who better than science fiction writers to inject a little imagination into counterterrorism planning?

I discounted the exercise at the time, calling it “embarrassing.” I never thought that 9/11 was a failure of imagination. I thought, and still think, that 9/11 was primarily a confluence of three things: the dual failure of centralized coordination and local control within the FBI, and some lucky breaks on the part of the attackers. More imagination leads to more movie-plot threats—which contributes to overall fear and overestimation of the risks. And that doesn’t help keep us safe at all.

Recently, I read a paper by Magne Jørgensen that provides some insight into why this is so. Titled More Risk Analysis Can Lead to Increased Over-Optimism and Over-Confidence, the paper isn’t about terrorism at all. It’s about software projects.

Most software development project plans are overly optimistic, and most planners are overconfident about their overoptimistic plans. Jørgensen studied how risk analysis affected this. He conducted four separate experiments on software engineers, and concluded (though there are lots of caveats in the paper, and more research needs to be done) that performing more risk analysis can make engineers more overoptimistic instead of more realistic.

Potential explanations all come from behavioral economics: cognitive biases that affect how we think and make decisions. (I’ve written about some of these biases and how they affect security decisions, and there’s a great book on the topic as well.)

First, there’s a control bias. We tend to underestimate risks in situations where we are in control, and overestimate risks in situations when we are not in control. Driving versus flying is a common example. This bias becomes stronger with familiarity, involvement and a desire to experience control, all of which increase with increased risk analysis. So the more risk analysis, the greater the control bias, and the greater the underestimation of risk.

The second explanation is the availability heuristic. Basically, we judge the importance or likelihood of something happening by the ease of bringing instances of that thing to mind. So we tend to overestimate the probability of a rare risk that is seen in a news headline, because it is so easy to imagine. Likewise, we underestimate the probability of things occurring that don’t happen to be in the news.

A corollary of this phenomenon is that, if we’re asked to think about a series of things, we overestimate the probability of the last thing thought about because it’s more easily remembered.

According to Jørgensen’s reasoning, people tend to do software risk analysis by thinking of the severe risks first, and then the more manageable risks. So the more risk analysis that’s done, the less severe the last risk imagined, and thus the greater the underestimation of the total risk.

The third explanation is similar: the peak-end rule. When thinking about a total experience, people tend to place too much weight on the last part of the experience. In one experiment, people had to hold their hands under cold water for one minute. Then, they had to hold their hands under cold water for one minute again, then keep their hands in the water for an additional 30 seconds while the temperature was gradually raised. When asked about it afterwards, most people preferred the second option to the first, even though the second had more total discomfort. (An intrusive medical device was redesigned along these lines, resulting in a longer period of discomfort but a relatively comfortable final few seconds. People liked it a lot better.) This means, like the second explanation, that the least severe last risk imagined gets greater weight than it deserves.

Fascinating stuff. But the biases produce the reverse effect when it comes to movie-plot threats. The more you think about far-fetched terrorism possibilities, the more outlandish and scary they become, and the less control you think you have. This causes us to overestimate the risks.

Think about this in the context of terrorism. If you’re asked to come up with threats, you’ll think of the significant ones first. If you’re pushed to find more, if you hire science-fiction writers to dream them up, you’ll quickly get into the low-probability movie plot threats. But since they’re the last ones generated, they’re more available. (They’re also more vivid—science fiction writers are good at that—which also leads us to overestimate their probability.) They also suggest we’re even less in control of the situation than we believed. Spending too much time imagining disaster scenarios leads people to overestimate the risks of disaster.

I’m sure there’s also an anchoring effect in operation. This is another cognitive bias, where people’s numerical estimates of things are affected by numbers they’ve most recently thought about, even random ones. People who are given a list of three risks will think the total number of risks is lower than people who are given a list of 12 risks. So if the science fiction writers come up with 137 risks, people will believe that the number of risks is higher than they otherwise would—even if they recognize the 137 number is absurd.

Jørgensen does not believe risk analysis is useless in software projects, and I don’t believe scenario brainstorming is useless in counterterrorism. Both can lead to new insights and, as a result, a more intelligent analysis of both specific risks and general risk. But an over-reliance on either can be detrimental.

Last month, at the 2009 Homeland Security Science & Technology Stakeholders Conference in Washington D.C., science fiction writers helped the attendees think differently about security. This seems like a far better use of their talents than imagining some of the zillions of ways terrorists can attack America.

This essay originally appeared on Wired.com.

Posted on June 19, 2009 at 6:49 AM

Second SHB Workshop Liveblogging (7)

Session Six—”Terror”—chaired by Stuart Schechter.

Bill Burns, Decision Research (suggested reading: The Diffusion of Fear: Modeling Community Response to a Terrorist Strike), studies social reaction to risk. He discussed his theoretical model of how people react to fear events, and data from the 9/11 attacks, the 7/7 bombings in the UK, and the 2008 financial collapse. Basically, we can’t remain fearful. No matter what happens, fear spikes immediately after and recovers 45 or so days afterwards. He believes that the greatest mistake we made after 9/11 was labeling the event as terrorism instead of an international crime.

Chris Cocking, London Metropolitan University (suggested reading: Effects of social identity on responses to emergency mass evacuation), looks at the group behavior of people responding to emergencies. Traditionally, most emergency planning is based on the panic model: people in crowds are prone to irrational behavior and panic. There’s also a social attachment model that predicts that social norms don’t break down in groups. He prefers a self-categorization approach: disasters create a common identity, which results in orderly and altruistic behavior among strangers. The greater the threat, the greater the common identity, and spontaneous resilience can occur. He displayed a photograph of “panic” in New York on 9/11 and showed how it wasn’t panic at all. Panic seems to be more a myth than a reality. This has policy implications during an event: provide people with information, and they are more likely to underreact than overreact. If there is overreaction, it’s because people are acting as individuals rather than as a group, so those in authority should encourage a sense of collective identity. “Crowds can be part of the solution rather than part of the problem.”

Richard John, University of Southern California (suggested reading: Decision Analysis by Proxy for the Rational Terrorist), talked about the process of social amplification of risk (with respect to terrorism). Events result in relatively small losses; it’s the changes in behavior following an event that result in much greater losses. There’s a dynamic of risk perception, and it’s very contextual. He uses vignettes to study how risk perception changes over time, and discussed some of the studies he’s conducting and ideas for future studies.

Mark Stewart, University of Newcastle, Australia (suggested reading: A risk and cost-benefit assessment of United States aviation security measures; Risk and Cost-Benefit Assessment of Counter-Terrorism Protective Measures to Infrastructure), examines infrastructure security and whether the costs exceed the benefits. He talked about cost/benefit trade-off, and how to apply probabilistic terrorism risk assessment; then, he tried to apply this model to the U.S. Federal Air Marshal Service. His result: they’re not worth it. You can quibble with his data, but the real value is a transparent process. During the discussion, I said that it is important to realize that risks can’t be taken in isolation, that anyone making a security trade-off is balancing several risks: terrorism risks, political risks, the personal risks to his career, etc.
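
Stewart's kind of analysis comes down to comparing a measure's annual cost with the expected losses it averts. Here's a minimal sketch of that break-even logic, using round hypothetical numbers rather than his actual figures for the Federal Air Marshal Service:

```python
# Round hypothetical numbers for illustration -- not Stewart's actual figures.
annual_cost = 1_000_000_000        # yearly cost of the security measure, in dollars
loss_per_attack = 5_000_000_000    # assumed loss from the kind of attack it addresses
risk_reduction = 0.10              # assumed fraction of that attack risk the measure removes

# Break-even: how frequent would such attacks otherwise have to be for the
# measure to pay for itself?
break_even_attacks_per_year = annual_cost / (loss_per_attack * risk_reduction)
print(f"Break-even frequency: {break_even_attacks_per_year:.1f} attacks per year")
# If the plausible attack frequency is well below this, the measure's costs
# exceed its benefits.
```

You can quibble with every input, which is exactly the point: a transparent calculation makes the argument about the inputs rather than about gut feelings.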

John Adams, University College London (suggested reading: Deus e Brasileiro?; Can Science Beat Terrorism?; Bicycle bombs: a further inquiry), applies his risk thermostat model to terrorism. He presented a series of amusing photographs of overreactions to risk, most of them not really about risk aversion but more about liability aversion. He talked about bureaucratic paranoia, as well as bureaucratic incitements to paranoia, and how this is beginning to backfire. People treat risks differently, depending on whether they are voluntary, impersonal, or imposed, and whether people have total control, diminished control, or no control.

Dan Gardner, Ottawa Citizen (suggested reading: The Science of Fear: Why We Fear the Things We Shouldn’t—and Put Ourselves in Greater Danger), talked about how the media covers risks, threats, attacks, etc. He talked about the various ways the media screws up, all of which were familiar to everyone. His thesis is not that the media gets things wrong in order to increase readership/viewership and therefore profits, but that the media gets things wrong because reporters are human. Bad news bias is not a result of the media hyping bad news, but the natural human tendency to remember the bad more than the good. The evening news is centered around stories because people—including reporters—respond to stories, and stories with novelty, emotion, and drama are better stories.

Some of the discussion was about the nature of panic: whether and where it exists, and what it looks like. Someone from the audience questioned whether panic was related to proximity to the event; someone else pointed out that people very close to the 7/7 bombings took pictures and made phone calls—and that there was no evidence of panic. Also, on 9/11 pretty much everyone below where the airplanes struck the World Trade Center got out safely, while everyone above couldn’t get out and died. Angela Sasse pointed out that the previous terrorist attack against the World Trade Center, and the changes made in evacuation procedures afterwards, contributed to the lack of panic on 9/11. Bill Burns said that the purest form of panic is a drowning person. Jean Camp asked whether the recent attacks against women’s health providers should be classified as terrorism, or whether we are better off framing them as crime. There was also talk about sky marshals and their effectiveness. I said that it isn’t sky marshals that are a deterrent, but the idea of sky marshals. Terence Taylor said that increasing uncertainty on the part of the terrorists is, in itself, a security measure. There was also a discussion about how risk-averse terrorists are; they seem to want to believe they have an 80% or 90% chance of success before they will launch an attack.

Next, lunch—and two final sessions this afternoon.

Adam Shostack’s liveblogging is here. Ross Anderson’s liveblogging is in his blog post’s comments. Matt Blaze’s audio is here.

Posted on June 12, 2009 at 12:01 PM

Second SHB Workshop Liveblogging (3)

The second session was about fraud. (These session subjects are only general. We tried to stick related people together, but there was the occasional oddball—and scheduling constraint—to deal with.)

Julie Downs, Carnegie Mellon University (suggested reading: Behavioral Response to Phishing Risk; Parents’ vaccination comprehension and decisions; The Psychology of Food Consumption), is a psychologist who studies how people make decisions, and talked about phishing. To determine how people respond to phishing attempts—what e-mails they open and when they click on links—she watched as people interacted with their e-mail. She found that most people’s strategies to deal with phishing attacks might have been effective 5-10 years ago, but are no longer sufficient now that phishers have adapted. She also found that educating people about phishing didn’t make them more effective at spotting phishing attempts, but made them more likely to be afraid of doing anything online. She found this same overreaction among people who were recently the victims of phishing attacks, but again people were no better at separating real e-mail from phishing attempts. What does make a difference is contextual understanding: how to parse a URL, how and why the scams happen, what SSL does and doesn’t do.

Jean Camp, Indiana University (suggested reading: Experimental Evaluation of Expert and Non-expert Computer Users’ Mental Models of Security Risks), studies people taking risks online. Four points: 1) “people create mental models from internal narratives about risk,” 2) “risk mitigating action is taken only if the risk is perceived as relevant,” 3) “contextualizing risk can show risks as relevant,” and 4) “narrative can increase desire and capacity to use security tools.” Stories matter: “people are willing to wash out their cat food cans and sweep up their sweet gum balls to be a good neighbor, but allow their computers to join zombie networks” because there’s a good story in the former and none in the latter. She presented two experiments to demonstrate this. One was a video experiment watching business majors try to install PGP. No one was successful: there was no narrative, and the mixed metaphor of physical and cryptographic “key” confused people.

Matt Blaze, University of Pennsylvania (his blog), talked about electronic voting machines and fraud. He related this anecdote about actual electronic voting machine vote fraud in Kentucky. In the question session, he speculated about the difficulty of having a security model that would have captured the problem, and how to know whether that model was complete enough.

Jeffrey Friedberg, Microsoft (suggested reading: Internet Fraud Battlefield; End to End Trust and the Trust User Experience; Testimony on “spyware”), discussed research at Microsoft around the Trust User Experience (TUX). He talked about the difficulty of verifying SSL certificates. Then he talked about how Microsoft added a “green bar” to signify trusted sites, and how people who learned to trust the green bar were fooled by “picture in picture attacks”: where a hostile site embedded a green-bar browser window in its page. Most people don’t understand that the information inside the browser window is arbitrary, but that the stuff around it is not. The user interface, user experience, mental models all matter. Designing and evaluating TUX is hard. From the questions: training doesn’t help much, because given a plausible story, people will do things counter to their training.

Stuart Schechter, Microsoft, presented this research on secret questions. Basically, secret questions don’t work. They’re easily guessable based on the most common answers; friends and relatives of people can easily predict unique answers; and people forget their answers. Even worse, the more memorable the question/answers are, the easier they are to guess. Having people write their own questions is no better: “What’s my blood type?” “How tall am I?”
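
One way to make the guessability point concrete is to look at how concentrated the answers to a memorable question are. A tiny sketch with an invented answer distribution (the real measurements are in the paper; these shares are assumptions for illustration):

```python
# Invented answer distribution for a question like "What's your favorite color?"
# These shares are illustrative assumptions, not data from the study.
answer_shares = {
    "blue": 0.30, "red": 0.18, "green": 0.15, "purple": 0.10,
    "black": 0.08, "pink": 0.06, "yellow": 0.05, "orange": 0.04, "white": 0.04,
}

# An attacker who simply tries the k most common answers succeeds this often:
ranked = sorted(answer_shares.values(), reverse=True)
for k in (1, 3, 5):
    print(f"Top-{k} guessing succeeds against about {sum(ranked[:k]):.0%} of accounts")
```

The more memorable the question, the more its answers cluster, and the better this dumb top-k strategy works.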

Tyler Moore, Harvard University (suggested reading: The Consequences of Non-Cooperation in the Fight against Phishing; Information Security Economics—and Beyond), discussed his empirical studies on online crime and defense. Fraudsters are good at duping users, but they’re also effective at exploiting failures among IT professionals to perpetuate the infrastructure necessary to carry out these exploits on a large scale (hosting fake web pages, sending spam, laundering the profits via money mules, and so on). There is widespread refusal among the defenders to cooperate with each other, and attackers exploit these limitations. We are better at removing phishing websites than we are at defending against the money mules. Defenders tend to fix immediate problems, but not underlying problems.

In the discussion phase, there was a lot of talk about the relationships between websites, like banks, and users—and how that affects security for both good and bad. Jean Camp doesn’t want a relationship with her bank, because that unduly invests her in the bank. (Someone from the audience pointed out that, as a U.S. taxpayer, she is already invested in her bank.) Angela Sasse said that the correct metaphor is “rules of engagement,” rather than relationships.

Adam Shostack’s liveblogging. Ross Anderson’s liveblogging is in his blog post’s comments.

Matt Blaze is taping the sessions—except for the couple of presenters who would rather not be taped—I’ll post his links as soon as the files are online.

EDITED TO ADD (6/11): Audio of the session is here.

Posted on June 11, 2009 at 11:42 AM

Why Is Terrorism so Hard?

I don’t know how I missed this great series from Slate in February. It’s eight essays exploring why there have been no follow-on terrorist attacks in the U.S. since 9/11 (not counting the anthrax mailings, I guess). Some excerpts:

Al-Qaida’s successful elimination of the Twin Towers, part of the Pentagon, four jetliners, and nearly 3,000 innocent lives makes the terror group seem, in hindsight, diabolically brilliant. But when you review how close the terrorists came to being exposed by U.S. intelligence, 9/11 doesn’t look like an ingenious plan that succeeded because of shrewd planning. It looks like a stupid plan that succeeded through sheer dumb luck.

[…]

Even when it isn’t linked directly to terrorism, Muslim radicalism seems more prevalent—and certainly more visible—inside the United Kingdom, and in Western Europe generally, than it is inside the United States.

Why the difference? Economics may be one reason. American Muslims are better-educated and wealthier than the average American.

[…]

According to [one] theory, the 9/11 attacks were so stunning a success that they left al-Qaida’s leadership struggling to conceive and carry out an even more fearsome and destructive plan against the United States. In his 2006 book The One Percent Doctrine, journalist Ron Suskind attributes to the U.S. intelligence community the suspicion that “Al Qaeda wouldn’t want to act unless it could top the World Trade Center and the Pentagon with something even more devastating, creating an upward arc of rising and terrible expectation as to what, then, would follow.”

[…]

From a broader policy viewpoint, the Bush administration’s most significant accomplishment, terrorism experts tend to agree, was the 2001 defeat of Afghanistan’s Taliban regime and the destruction of Bin Laden’s training camps. As noted in “The Terrorists-Are-Dumb Theory” and “The Melting Pot Theory,” two-thirds of al-Qaida’s leadership was captured or killed. Journalist Lawrence Wright estimates that nearly 80 percent of al-Qaida’s Afghanistan-based membership was killed in the U.S. invasion, and intelligence estimates suggest al-Qaida’s current membership may be as low as 200 or 300.

[…]

The departing Bush administration’s claim that deposing Saddam Hussein helped prevent acts of terror in the United States has virtually no adherents, except to the extent that it drew some jihadis into Iraq. The Iraq war reduced U.S. standing in the Muslim world, especially when evidence surfaced that U.S. military officials had tortured and humiliated prisoners at the Abu Ghraib prison.

[…]

When Schelling, Abrams, and Sageman argue that terrorists are irrational, what they mean is that terror groups seldom realize their big-picture strategic goals. But Berrebi says you can’t pronounce terrorists irrational until you know what they really want. “We don’t know what are the real goals of each organization,” he says. Any given terror organization is likely to have many competing and perhaps even contradictory goals. Given these groups’ inherently secret nature, outsiders aren’t likely to learn which of these goals is given priority.

Read the whole thing.

Posted on June 3, 2009 at 1:35 PM

Research on Movie-Plot Threats

This could be interesting:

Emerging Threats and Security Planning: How Should We Decide What Hypothetical Threats to Worry About?

Brian A. Jackson, David R. Frelinger

Concerns about how terrorists might attack in the future are central to the design of security efforts to protect both individual targets and the nation overall. In thinking about emerging threats, security planners are confronted by a panoply of possible future scenarios coming from sources ranging from the terrorists themselves to red-team brainstorming efforts to explore ways adversaries might attack in the future. This paper explores an approach to assessing emerging and/or novel threats and deciding whether—or how much—they should concern security planners by asking two questions: (1) Are some of the novel threats “niche threats” that should be addressed within existing security efforts? (2) Which of the remaining threats are attackers most likely to execute successfully and should therefore be of greater concern for security planners? If threats can reasonably be considered niche threats, they can be prudently addressed in the context of existing security activities. If threats are unusual enough, suggest significant new vulnerabilities, or their probability or consequences means they cannot be considered lesser included cases within other threats, prioritizing them based on their ease of execution provides a guide for which threats merit the greatest concern and most security attention. This preserves the opportunity to learn from new threats yet prevents security planners from being pulled in many directions simultaneously by attempting to respond to every threat at once.
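
The abstract's two questions amount to a simple triage procedure. Here's a minimal sketch of that logic; the threat entries and field names are hypothetical, not taken from the paper:

```python
# Hypothetical threats and scores, purely to illustrate the two-question triage.
threats = [
    {"name": "variant of an already-defended bombing tactic", "niche": True,  "ease": 0.6},
    {"name": "novel attack exposing a new vulnerability",     "niche": False, "ease": 0.4},
    {"name": "elaborate movie-plot scenario",                  "niche": False, "ease": 0.05},
]

# Question 1: niche threats get folded into existing security efforts.
niche = [t["name"] for t in threats if t["niche"]]

# Question 2: the remaining threats are prioritized by how easily an attacker
# could actually execute them, which is what should drive planners' attention.
novel = sorted((t for t in threats if not t["niche"]),
               key=lambda t: t["ease"], reverse=True)

print("Handle within existing programs:", niche)
print("Prioritize by ease of execution:", [t["name"] for t in novel])
```

The point of the structure is the one the abstract makes: you still get to learn from new threats, but you aren't pulled in every direction at once.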

Full paper available here.

Posted on June 1, 2009 at 3:29 PM

This Week's Terrorism Arrests

Four points. One: There was little danger of an actual terrorist attack:

Authorities said the four men have long been under investigation and there was little danger they could actually have carried out their plan, NBC News’ Pete Williams reported.

[…]

In their efforts to acquire weapons, the defendants dealt with an informant acting under law enforcement supervision, authorities said. The FBI and other agencies monitored the men and provided an inactive missile and inert C-4 to the informant for the defendants, a federal complaint said.

The investigation had been under way for about a year.

“They never got anywhere close to being able to do anything,” one official told NBC News. “Still, it’s good to have guys like this off the street.”

Of course, politicians are using this incident to peddle more fear:

“This was a very serious threat that could have cost many, many lives if it had gone through,” Representative Peter T. King, Republican from Long Island, said in an interview with WPIX-TV. “It would have been a horrible, damaging tragedy. There’s a real threat from homegrown terrorists and also from jailhouse converts.”

Two, they were caught by traditional investigation and intelligence. Not airport security. Not warrantless eavesdropping. But old-fashioned investigation and intelligence. This is what works. This is what keeps us safe. Here’s an essay I wrote in 2004 that says exactly that.

The only effective way to deal with terrorists is through old-fashioned police and intelligence work—discovering plans before they’re implemented and then going after the plotters themselves.

Three, they were idiots:

The ringleader of the four-man homegrown terror cell accused of plotting to blow up synagogues in the Bronx and military planes in Newburgh admitted to a judge today that he had smoked pot before his bust last night.

When U.S. Magistrate Judge Lisa M. Smith asked James Cromitie if his judgment was impaired during his appearance in federal court in White Plains, the 55-year-old confessed: “No. I smoke it regularly. I understand everything you are saying.”

Four, an “informant” helped this group a lot:

In April, Mr. Cromitie and the three other men selected the synagogues as their targets, the statement said. The informant soon helped them get the weapons, which were incapable of being fired or detonated, according to the authorities.

The warning I wrote in “Portrait of the Modern Terrorist as an Idiot” is timely again:

Despite the initial press frenzies, the actual details of the cases frequently turn out to be far less damning. Too often it’s unclear whether the defendants are actually guilty, or if the police created a crime where none existed before.

The JFK Airport plotters seem to have been egged on by an informant, a twice-convicted drug dealer. An FBI informant almost certainly pushed the Fort Dix plotters to do things they wouldn’t have ordinarily done. The Miami gang’s Sears Tower plot was suggested by an FBI undercover agent who infiltrated the group. And in 2003, it took an elaborate sting operation involving three countries to arrest an arms dealer for selling a surface-to-air missile to an ostensible Muslim extremist. Entrapment is a very real possibility in all of these cases.

Actually, that whole 2007 essay is timely again. Some things never change.

Posted on May 22, 2009 at 6:11 AM

Attacking the Food Supply

Terrorists attacking our food supply is a nightmare scenario that has been given new life during the recent swine flu outbreak. Although it seems easy to do, understanding why it hasn’t happened is important. G.R. Dalziel, at the Nanyang Technological University in Singapore, has written a report chronicling every confirmed case of malicious food contamination in the world since 1950: 365 cases in all, plus 126 additional unconfirmed cases. What he found demonstrates the reality of terrorist food attacks.

It turns out 72% of the food poisonings occurred at the end of the food supply chain—at home—typically by a friend, relative, neighbour, or co-worker trying to kill or injure a specific person. A characteristic example is Heather Mook of York, who in 2007 tried to kill her husband by putting rat poison in his spaghetti.

Most of these cases resulted in fewer than five casualties—Mook only injured her husband in this incident—although 16% resulted in five or more. Of the 19 cases that claimed 10 or more lives, four involved serial killers operating over several years.

Another 23% of cases occurred at the retail or food service level. A 1998 incident in Japan, where someone put arsenic in a curry sold at a summer festival, killing four and hospitalising 63, is a typical example. Only 11% of these incidents resulted in 100 or more casualties, while 44% resulted in none.

There are very few incidents of people contaminating the actual food supply. People deliberately contaminated a water supply seven times, resulting in three deaths. There is only one example of someone deliberately contaminating a crop before harvest—in Australia in 2006—and the crops were recalled before they could be sold. And in the three cases of someone deliberately contaminating food during packaging and distribution, including a 2005 case in the UK where glass and needles were baked into loaves of bread, no one died or was injured.
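
Adding up the shares quoted above makes the pattern stark. A quick tally (the percentages are the ones cited here from Dalziel's report; the arithmetic is just illustration):

```python
confirmed_cases = 365

# Shares as quoted above from Dalziel's report.
share_at_home = 0.72    # end of the supply chain: homes, usually aimed at one person
share_retail = 0.23     # retail or food-service level

remainder = 1 - share_at_home - share_retail
print(f"Remaining share: {remainder:.0%} of {confirmed_cases} confirmed cases, "
      f"or roughly {remainder * confirmed_cases:.0f} incidents in nearly six decades")
# That small remainder is roughly where the water-supply, pre-harvest, and
# packaging/distribution cases described above fall.
```

In other words, the incidents that actually touch the supply chain upstream of retail are a rounding error compared with the domestic poisonings.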

This isn’t the stuff of bioterrorism. The closest example occurred in 1984 in the US, where members of a religious group known as the Rajneeshees contaminated several restaurant salad bars with salmonella enterica typhimurium, sickening 751, hospitalising 45, but killing no one. In fact, no one knew this was malicious until a year later, when one of the perpetrators admitted it.

Almost all of the food contaminations used conventional poisons such as cyanide, drain cleaner, mercury, or weed killer. There were nine incidents of biological agents, including salmonella, ricin, and faecal matter, and eight cases of radiological matter. The 2006 London poisoning of the former KGB agent Alexander Litvinenko with polonium-210 in his tea is an example of the latter.

And that assassination illustrates the real risk of malicious food poisonings. What is discussed in terrorist training manuals, and what the CIA is worried about, is the use of contaminated food in targeted assassinations. The quantities involved for mass poisonings are too great, the nature of the food supply too vast and the details of any plot too complicated and unpredictable to be a real threat. That becomes crystal clear as you read the details of the different incidents: it’s hard to kill one person, and very hard to kill dozens. Hundreds, thousands: it’s just not going to happen any time soon. The fear of bioterror is much greater, and the panic from any bioterror scare will injure more people, than bioterrorism itself.

Far more dangerous are accidental contaminations due to negligent industry practices, such as the 2006 spinach E coli and, more recently, peanut salmonella contaminations in the US, the 2008 milk contaminations in China, and the BSE-infected beef from earlier this decade. And the systems we have in place to deal with these accidental contaminations also work to mitigate any intentional ones.

In 2004, the then US secretary of health and human services, Tommy Thompson, said on Fox News: “I cannot understand why terrorists have not attacked our food supply. Because it is so easy to do.”

Guess what? It’s not at all easy to do.

This essay previously appeared in The Guardian.

Posted on May 14, 2009 at 6:24 AM

Mathematical Illiteracy

This may be the stupidest example of risk assessment I’ve ever seen. It’s a video clip from a recent Daily Show, about the dangers of the Large Hadron Collider. The segment starts off slow, but then there’s an exchange with high school science teacher Walter L. Wagner, who insists the device has a 50-50 chance of destroying the world:

“If you have something that can happen, and something that won’t necessarily happen, it’s going to either happen or it’s going to not happen, and so the best guess is 1 in 2.”

“I’m not sure that’s how probability works, Walter.”

This is followed by clips of news shows taking the guy seriously.
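
Wagner's mistake is treating “two possible outcomes” as if it meant “two equally likely outcomes.” A one-glance counterexample, using a made-up 1-in-100-million lottery purely for scale:

```python
# "It either happens or it doesn't" does not make the odds 1 in 2.
odds_against = 100_000_000       # hypothetical 1-in-100-million jackpot, for scale only
p_win = 1 / odds_against
p_lose = 1 - p_win

print(f"P(win)  = {p_win:.8f}")   # 0.00000001
print(f"P(lose) = {p_lose:.8f}")  # 0.99999999
# Two possible outcomes, nothing close to a 50-50 split -- the same reasoning
# applies to "the collider destroys the world" versus "it doesn't".
```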

In related news, almost four-fifths of Americans don’t know that a trillion is a million million, and most think it’s less than that. Is it any wonder why we’re having so much trouble with national budget debates?

Posted on May 4, 2009 at 6:19 AM

Lessons from the Columbine School Shooting

Lots of high-tech gear, but that’s not what makes schools safe:

Some of the noticeable security measures remain, but experts say the country is exploring a new way to protect kids from in-school violence: administrators now want to foster school communities that essentially can protect themselves with or without the high-tech gear.

“The first and best line of defense is always a well-trained, highly alert staff and student body,” said Kenneth Trump, president of National School Safety and Security Services, an Ohio-based firm specializing in school security.

“The No. 1 way we find out about weapons in schools is not from a piece of equipment [such as a metal detector] but from a kid who comes forward and reports it to an adult that he or she trusts.”

Of course, there never was an epidemic of school shootings—it just seemed that way in the media. And kids are much safer in schools than outside of them.

Posted on April 29, 2009 at 5:57 AM

