Entries Tagged "risks"

Risk Tolerance and Culture

This is an interesting study on cultural differences in risk tolerance.

The Cultures of Risk Tolerance

Abstract: This study explores the links between culture and risk tolerance, based on surveys conducted in 23 countries. Altogether, more than 4,000 individuals participated in the surveys. Risk tolerance is associated with culture. Risk tolerance is relatively low in countries where uncertainty avoidance is relatively high and in countries which are relatively individualistic. Risk tolerance is also relatively low in countries which are relatively egalitarian and harmonious. And risk tolerance is relatively high in countries where trust is relatively high. Culture is also associated with risk tolerance indirectly, through the association between culture and income-per-capita. People in countries with relatively high income-per-capita tend to be relatively individualistic, egalitarian, and trusting. Risk tolerance is relatively high in countries with relatively low income-per-capita.

Posted on September 14, 2011 at 2:02 PM

TSA Administrator John Pistole on the Future of Airport Security

There’s a lot here that’s worth watching. He talks about expanding behavioral detection. He talks about less screening for “trusted travelers.”

So, what do the next 10 years hold for transportation security? I believe it begins with TSA’s continued movement toward developing and implementing a more risk-based security system, a phrase you may have heard the last few months. When I talk about risk-based, intelligence-driven security it’s important to note that this is not about a specific program per se, or a limited initiative being evaluated at a handful of airports.

On the contrary, risk-based security is much more comprehensive. It means moving further away from what may have seemed like a one-size-fits-all approach to security. It means focusing our agency’s resources on those we know the least about, and using intelligence in better ways to inform the screening process.

[…]

Another aspect of our risk-based, intelligence-driven security system is the trusted traveler proof-of-concept that will begin this fall. As part of this proof-of-concept, we are looking at how to expedite the screening process for travelers we know and trust the most, and travelers who are willing to voluntarily share more information with us before they travel. Doing so will then allow our officers to more effectively prioritize screening and focus our resources on those passengers we know the least about and those of course on watch lists.

[…]

We’re also working with airlines already testing a known-crewmember concept, and we are evaluating changes to the security screening process for children 12-and-under. Both of these concepts reflect the principles of risk-based security, considering that airline pilots are among our country’s most trusted travelers and the preponderance of intelligence indicates that children 12-and-under pose little risk to aviation security.

Finally, we are also evaluating the value of expanding TSA’s behavior detection program, to help our officers identify people exhibiting signs that may indicate a potential threat. This reflects an expansion of the agency’s existing SPOT program, which was developed by adapting global best practices. This effort also includes additional, specialized training for our organization’s Behavior Detection Officers and is currently being tested at Boston’s Logan International airport, where the SPOT program was first introduced.

Posted on September 14, 2011 at 6:55 AM

The Problem with Using the Cold War Metaphor to Describe Cyberspace Risks

Nice essay on the problems with talking about cyberspace risks using “Cold War” metaphors:

The problem with threat inflation and misapplied history is that there are extremely serious risks, but also manageable responses, from which they steer us away. Massive, simultaneous, all-encompassing cyberattacks on the power grid, the banking system, transportation networks, etc. along the lines of a Cold War first strike or what Defense Secretary Leon Panetta has called the “next Pearl Harbor” (another overused and ill-suited analogy) would certainly have major consequences, but they also remain completely theoretical, and the nation would recover. In the meantime, a real national security danger is being ignored: the combination of online crime and espionage that’s gradually undermining our finances, our know-how and our entrepreneurial edge. While would-be cyber Cold Warriors stare at the sky and wait for it to fall, they’re getting their wallets stolen and their offices robbed.

[….]

If the most apt parallel is not the Cold War, then what are some alternatives we could turn to for guidance, especially when it comes to the problem of building up international cooperation in this space? Cybersecurity’s parallels, and some of its solutions, lie more in the 1840s and ’50s than they do in the 1940s and ’50s.

Much like the Internet is becoming today, in centuries past the sea was a primary domain of commerce and communication upon which no one single actor could claim complete control. What is notable is that the actors that related to maritime security and war at sea back then parallel many of the situations on our networks today. They scaled from individual pirates to state fleets with a global presence like the British Navy. In between were state-sanctioned pirates, or privateers. Much like today’s “patriotic hackers” (or NSA contractors), these forces were used both to augment traditional military forces and to add challenges of attribution to those trying to defend far-flung maritime assets. In the Golden Age of privateering, an attacker could quickly shift identity and locale, often taking advantage of third-party harbors with loose local laws. The actions that attacker might take ranged from trade blockades (akin to a denial of service) to theft and hijacking to actual assaults on military assets or underlying economic infrastructure to great effect.

Ross Anderson is the first person I heard comparing today’s cybercrime threats to global piracy in the 19th century.

Posted on August 26, 2011 at 1:58 PM

Terrorism in the U.S. Since 9/11

John Mueller and his students analyze the 33 cases of attempted [EDITED TO ADD: Islamic extremist] terrorism in the U.S. since 9/11. So few of them are actually real, and so many of them were created or otherwise facilitated by law enforcement.

The death toll of all these is fourteen: thirteen at Ft. Hood and one in Little Rock. I think it’s fair to add to this the 2002 incident at Los Angeles Airport where a lone gunman killed two people at the El Al ticket counter, so that’s sixteen deaths in the U.S. to terrorism in the past ten years.

Given the credible estimate that we’ve spent $1 trillion on anti-terrorism security (this does not include our many foreign wars), that’s $62.5 billion per life [EDITED: lost]. Is there any other risk that we are even remotely as crazy about?

Note that everyone who died was shot with a gun. No Islamic extremist has been able to successfully detonate a bomb in the U.S. in the past ten years, not even a Molotov cocktail. (In the U.K., there has been only one successful terrorist bombing in the last ten years: the 2005 London Underground attacks.) And almost all of the 33 incidents (34 if you add LAX) have involved lone actors, with no ties to al Qaeda.

I remember the government fear mongering after 9/11. How there were hundreds of sleeper cells in the U.S. How terrorism would become the new normal unless we implemented all sorts of Draconian security measures. You’d think that—if this were even remotely true—we would have seen more attempted terrorism in the U.S. over the past decade.

And I think arguments like “the government has secretly stopped lots of plots” don’t hold any water. Just look at the list, and remember how the Bush administration would hype even the most tenuous terrorist incident. Stoking fear was the policy. If the government stopped any other plots, they would have made as much of a big deal of them as they did of these 33 incidents.

EDITED TO ADD (8/26): According to the State Department’s recent report, fifteen American private citizens died in terrorist attacks in 2010: thirteen in Afghanistan and one each in Iraq and Uganda. Worldwide, 13,186 people died from terrorism in 2010. These numbers pale even in comparison to things that aren’t very risky.

Here’s data on incidents from 1970 to 2004. And here’s Nate Silver with data showing that the 1970s and 1980s were more dangerous with respect to airplane terrorism than the 2000s.

Also, look at Table 3 on page 16. The risk of dying in the U.S. from terrorism is substantially less than the risk of drowning in your bathtub, the risk of a home appliance killing you, or the risk of dying in an accident caused by a deer. Remember that more people die every month in automobile crashes than died in 9/11.

EDITED TO ADD (8/26): Looking over the incidents again, some of them would make pretty good movie plots. The point of my “movie-plot threat” phrase is not that terrorist attacks are never like that, but that concentrating defensive resources against them is pointless because 1) there are too many of them and 2) it is too easy for the terrorists to change tactics or targets.

EDITED TO ADD (9/1): As was pointed out here, I accidentally typed "lives saved" when I meant to type "lives lost." I corrected that, above. We generally have a regulatory safety goal of $1M to $10M per life saved. For the $100B we spend per year on counterterrorism to be worth it, it would need to save 10,000 lives per year.
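The arithmetic behind these figures is simple enough to check. A minimal sketch, using only the rounded numbers from the post:

```python
# Back-of-the-envelope numbers from the post.
total_spent = 1_000_000_000_000   # ~$1 trillion on anti-terrorism security
deaths = 16                       # U.S. terrorism deaths since 9/11, incl. LAX

cost_per_life = total_spent / deaths
print(f"${cost_per_life:,.0f} per life lost")  # $62,500,000,000

# Regulatory safety spending is usually valued at $1M-$10M per life saved.
annual_spending = 100_000_000_000  # ~$100 billion per year
value_per_life = 10_000_000        # upper end of the range: $10M
breakeven_lives = annual_spending / value_per_life
print(f"Break-even: {breakeven_lives:,.0f} lives saved per year")  # 10,000
```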

Posted on August 26, 2011 at 6:26 AM

"Taxonomy of Operational Cyber Security Risks"

I’m a big fan of taxonomies, and this—from Carnegie Mellon—seems like a useful one:

The taxonomy of operational cyber security risks, summarized in Table 1 and detailed in this section, is structured around a hierarchy of classes, subclasses, and elements. The taxonomy has four main classes:

  • actions of people—action, or lack of action, taken by people either deliberately or accidentally that impact cyber security
  • systems and technology failures—failure of hardware, software, and information systems
  • failed internal processes—problems in the internal business processes that impact the ability to implement, manage, and sustain cyber security, such as process design, execution, and control
  • external events—issues often outside the control of the organization, such as disasters, legal issues, business issues, and service provider dependencies

Each of these four classes is further decomposed into subclasses, and each subclass is described by its elements.
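The four top-level classes could be sketched as a simple data structure. The class names and descriptions below are quoted from the report; the dictionary layout itself is just an illustration, not the report's notation, and the nested subclasses and elements (detailed in the report) are omitted:

```python
# The taxonomy's four top-level classes, as quoted above. Subclasses and
# elements would nest one level deeper in the full taxonomy.
OPERATIONAL_CYBER_RISK_TAXONOMY = {
    "actions of people":
        "action, or lack of action, taken by people either deliberately "
        "or accidentally that impacts cyber security",
    "systems and technology failures":
        "failure of hardware, software, and information systems",
    "failed internal processes":
        "problems in the internal business processes that impact the "
        "ability to implement, manage, and sustain cyber security",
    "external events":
        "issues often outside the control of the organization, such as "
        "disasters, legal issues, business issues, and service provider "
        "dependencies",
}

for risk_class, description in OPERATIONAL_CYBER_RISK_TAXONOMY.items():
    print(f"{risk_class}: {description}")
```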

Posted on August 10, 2011 at 6:39 AM

Revenge Effects of Too-Safe Playground Equipment

Sometimes too much security isn’t good.

After observing children on playgrounds in Norway, England and Australia, Dr. Sandseter identified six categories of risky play: exploring heights, experiencing high speed, handling dangerous tools, being near dangerous elements (like water or fire), rough-and-tumble play (like wrestling), and wandering alone away from adult supervision. The most common is climbing heights.

“Climbing equipment needs to be high enough, or else it will be too boring in the long run,” Dr. Sandseter said. “Children approach thrills and risks in a progressive manner, and very few children would try to climb to the highest point for the first time they climb. The best thing is to let children encounter these challenges from an early age, and they will then progressively learn to master them through their play over the years.”

[…]

By gradually exposing themselves to more and more dangers on the playground, children are using the same habituation techniques developed by therapists to help adults conquer phobias, according to Dr. Sandseter and a fellow psychologist, Leif Kennair, of the Norwegian University for Science and Technology.

“Risky play mirrors effective cognitive behavioral therapy of anxiety,” they write in the journal Evolutionary Psychology, concluding that this “anti-phobic effect” helps explain the evolution of children’s fondness for thrill-seeking. While a youthful zest for exploring heights might not seem adaptive—why would natural selection favor children who risk death before they have a chance to reproduce?—the dangers seemed to be outweighed by the benefits of conquering fear and developing a sense of mastery.

Posted on July 25, 2011 at 1:06 PM

Is There a Hacking Epidemic?

Freakonomics asks: “Why has there been such a spike in hacking recently? Or is it merely a function of us paying closer attention and of institutions being more open about reporting security breaches?”

They posted five answers, including mine:

The apparent recent hacking epidemic is more a function of news reporting than an actual epidemic. Like shark attacks or school violence, natural fluctuations in data become press epidemics, as more reporters write about more events, and more people read about them. Just because the average person reads more articles about more events doesn’t mean that there are more events—just more articles.

Hacking for fun—like LulzSec—has been around for decades. It’s where hacking started, before criminals discovered the Internet in the 1990s. Criminal hacking for profit—like the Citibank hack—has been around for over a decade. International espionage existed for millennia before the Internet, and has never taken a holiday.

The past several months have brought us a string of newsworthy hacking incidents. First there was the hacking group Anonymous, and its hacktivism attacks as a response to the pressure to interdict contributions to Julian Assange‘s legal defense fund and the torture of Bradley Manning. Then there was the probably espionage-related attack against RSA, Inc. and its authentication token—made more newsworthy because of the bungling of the disclosure by the company—and the subsequent attack against Lockheed Martin. And finally, there were the very public attacks against Sony, which became the company to attack simply because everyone else was attacking it, and the public hacktivism by LulzSec.

None of this is new. None of this is unprecedented. To a security professional, most of it isn’t even interesting. And while national intelligence organizations and some criminal groups are organized, hacker groups like Anonymous and LulzSec are much more informal. Despite the impression we get from movies, there is no organization. There’s no membership, there are no dues, there is no initiation. It’s just a bunch of guys. You too can join Anonymous—just hack something, and claim you’re a member. That’s probably what the members of Anonymous arrested in Turkey were: 32 people who just decided to use that name.

It’s not that things are getting worse; it’s that things were always this bad. To a lot of security professionals, the value of some of these groups is to graphically illustrate what we’ve been saying for years: organizations need to beef up their security against a wide variety of threats. But the recent news epidemic also illustrates how safe the Internet is: news articles are the only contact most of us have had with any of these attacks.

Posted on July 21, 2011 at 6:07 AM

The Problem with Cyber-crime Surveys

Good paper: “Sex, Lies and Cyber-crime Surveys,” Dinei Florêncio and Cormac Herley, Microsoft Research.

Abstract: Much of the information we have on cyber-crime losses is derived from surveys. We examine some of the difficulties of forming an accurate estimate by survey. First, losses are extremely concentrated, so that representative sampling of the population does not give representative sampling of the losses. Second, losses are based on unverified self-reported numbers. Not only is it possible for a single outlier to distort the result, we find evidence that most surveys are dominated by a minority of responses in the upper tail (i.e., a majority of the estimate is coming from as few as one or two responses). Finally, the fact that losses are confined to a small segment of the population magnifies the difficulties of refusal rate and small sample sizes. Far from being broadly-based estimates of losses across the population, the cyber-crime estimates that we have appear to be largely the answers of a handful of people extrapolated to the whole population. A single individual who claims $50,000 losses, in an N=1000 person survey, is all it takes to generate a $10 billion loss over the population. One unverified claim of $7,500 in phishing losses translates into $1.5 billion.
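The extrapolation problem the authors describe is easy to demonstrate numerically. A minimal sketch, where the population size of 200 million is an assumption chosen so the numbers match the abstract's examples (real surveys use various weighting schemes):

```python
# Sketch of how one unverified outlier dominates a survey-based loss estimate.
POPULATION = 200_000_000   # assumed population, chosen to match the abstract
SAMPLE_SIZE = 1_000
scale = POPULATION / SAMPLE_SIZE  # each respondent "represents" 200,000 people

# 999 respondents report no loss; a single respondent claims $50,000.
responses = [0] * 999 + [50_000]

estimate = sum(responses) * scale
print(f"Extrapolated total loss: ${estimate:,.0f}")  # $10,000,000,000

# One unverified $7,500 phishing claim similarly becomes $1.5 billion.
print(f"One $7,500 claim: ${7_500 * scale:,.0f}")  # $1,500,000,000
```

The entire $10 billion estimate rests on one self-reported, unverifiable number, which is exactly the concentration problem the paper identifies.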

I’ve been complaining about our reliance on self-reported statistics for cyber-crime.

Posted on June 21, 2011 at 5:58 AM

Open-Source Software Feels Insecure

At first glance, this seems like a particularly dumb opening line of an article:

Open-source software may not sound compatible with the idea of strong cybersecurity, but….

But it’s not. Open source does sound like a security risk. Why would you want the bad guys to be able to look at the source code? They’ll figure out how it works. They’ll find flaws. They’ll—in extreme cases—sneak back-doors into the code when no one is looking.

Of course, these statements rely on the erroneous assumptions that security vulnerabilities are easy to find, and that proprietary source code makes them harder to find. And that secrecy is somehow aligned with security. I’ve written about this several times in the past, and there’s no need to rewrite the arguments again.

Still, we have to remember that the popular wisdom is that secrecy equals security, and open-source software doesn’t sound compatible with the idea of strong cybersecurity.

Posted on June 2, 2011 at 12:11 PM
