Entries Tagged "incentives"


Overreacting to Potential Bombs

This is a ridiculous overreaction:

The police bomb squad was called to 2 World Financial Center in lower Manhattan at midday when a security guard reported a package that seemed suspicious. Brookfield Properties, which runs the property, ordered an evacuation as a precaution.

That’s the entire building, a 44-story, 2.5-million-square-foot office building. And why?

The bomb squad determined the package was a fake explosive that looked like a 1940s-style pineapple grenade. It was mounted on a plaque that said “Complaint department: Take a number,” with a number attached to the pin.

It was addressed to someone at one of the financial institutions housed there and discovered by someone in the mail room.

If the grenade had been real, it could have destroyed—what?—a room. Of course, there’s no downside to Brookfield Properties overreacting.

Posted on May 8, 2012 at 7:03 AM

Bomb Threats As a Denial-of-Service Attack

The University of Pittsburgh has been the recipient of 50 bomb threats in the past two months (over 30 during the last week). Each time, the university evacuates the threatened building, searches it top to bottom—one of the threatened buildings is the 42-story Cathedral of Learning—finds nothing, and eventually resumes classes. This seems to be nothing more than a very effective denial-of-service attack.

Police have no leads. The threats started out as handwritten messages on bathroom walls, but are now being sent via e-mail and anonymous remailers. (Here is a blog and a Google Docs spreadsheet documenting the individual threats.)

The University is implementing some pretty annoying security theater in response:

To enter secured buildings, we all will need to present a University of Pittsburgh ID card. It is important to understand that book bags, backpacks and packages will not be allowed. There will be single entrances to buildings so there will be longer waiting times to get into the buildings. In addition, non-University of Pittsburgh residents will not be allowed in the residence halls.

I can’t see how this will help, but what else can the University do? Their incentives are such that they’re stuck overreacting. If they ignore the threats and they’re wrong, people will be fired. If they overreact to the threats and they’re wrong, they’ll be forgiven. There’s no incentive to do an actual cost-benefit analysis of the security measures.

For the attacker, though, the cost-benefit payoff is enormous. E-mails are cheap, and the response they induce is very expensive.
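
To make the asymmetry concrete, here is a back-of-the-envelope sketch in Python. Every figure in it (attacker minutes per e-mail, occupants displaced, value of an hour, search cost) is an assumption for illustration; only the threat count comes from the post.

```python
# Back-of-the-envelope cost asymmetry of a bomb-threat denial-of-service attack.
# All figures except the threat count are illustrative assumptions.

threats = 50                     # number of threats (from the post)
attacker_minutes_per_threat = 5  # assumed: writing and sending an anonymous e-mail

people_per_evacuation = 2000     # assumed occupants displaced per threatened building
hours_lost_per_person = 2        # assumed time lost per evacuation
value_of_hour = 30               # assumed dollars per person-hour
search_cost = 10_000             # assumed police/bomb-squad cost per building search

attacker_hours = threats * attacker_minutes_per_threat / 60
defender_cost = threats * (people_per_evacuation * hours_lost_per_person * value_of_hour
                           + search_cost)

print(f"Attacker effort: ~{attacker_hours:.1f} hours")
print(f"Defender cost:   ~${defender_cost:,.0f}")
# Under these assumptions, a few hours of attacker effort imposes millions of
# dollars in defender costs -- the essence of an asymmetric denial of service.
```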

If you have any information about the bomb threatener, contact the FBI. There’s a $50,000 reward waiting for you. For the university, paying that would be a bargain.

Posted on April 12, 2012 at 1:34 PM

The Dilemma of Counterterrorism Policy

Any institution delegated the task of preventing terrorism has a dilemma: it can either do its best to prevent terrorism, or it can do its best to make sure it’s not blamed for any terrorist attacks. I’ve talked about this dilemma for a while now, and it’s nice to see some research results that demonstrate its effects.

A. Peter McGraw, Alexander Todorov, and Howard Kunreuther, “A Policy Maker’s Dilemma: Preventing Terrorism or Preventing Blame,” Organizational Behavior and Human Decision Processes, 115 (May 2011): 25-34.

Abstract: Although anti-terrorism policy should be based on a normative treatment of risk that incorporates likelihoods of attack, policy makers’ anti-terror decisions may be influenced by the blame they expect from failing to prevent attacks. We show that people’s anti-terror budget priorities before a perceived attack and blame judgments after a perceived attack are associated with the attack’s severity and how upsetting it is but largely independent of its likelihood. We also show that anti-terror budget priorities are influenced by directly highlighting the likelihood of the attack, but because of outcome biases, highlighting the attack’s prior likelihood has no influence on judgments of blame, severity, or emotion after an attack is perceived to have occurred. Thus, because of accountability effects, we propose policy makers face a dilemma: prevent terrorism using normative methods that incorporate the likelihood of attack or prevent blame by preventing terrorist attacks the public find most blameworthy.

Think about this with respect to the TSA. Are they doing their best to mitigate terrorism, or are they doing their best to ensure that if there’s a terrorist attack the public doesn’t blame the TSA for missing it?

Posted on August 19, 2011 at 8:55 AM

Court Ruling on "Reasonable" Electronic Banking Security

One of the pleasant side effects of being too busy to write longer blog posts is that—if I wait long enough—someone else writes what I would have wanted to.

The ruling in the Patco Construction vs. People’s United Bank case is important, because the judge basically ruled that the bank’s substandard security was good enough—and Patco is stuck paying for the fraud that was a result of that substandard security. The details are important, and Brian Krebs has written an excellent summary.

EDITED TO ADD (7/13): Krebs also writes about a case going in the opposite direction in a Michigan court.

Posted on June 17, 2011 at 12:09 PM

New Siemens SCADA Vulnerabilities Kept Secret

SCADA systems—computer systems that control industrial processes—are one of the ways a computer hack can directly affect the real world. Here, the fears multiply. It’s not bad guys deleting your files, or getting your personal information and taking out credit cards in your name; it’s bad guys spewing chemicals into the atmosphere and dumping raw sewage into waterways. It’s Stuxnet: centrifuges spinning out of control and destroying themselves. Never mind how realistic the threat is, it’s scarier.

Last week, a researcher was successfully pressured by the Department of Homeland Security not to disclose details “before Siemens could patch the vulnerabilities.”

Beresford wouldn’t say how many vulnerabilities he found in the Siemens products, but said he gave the company four exploit modules to test. He believes that at least one of the vulnerabilities he found affects multiple SCADA-system vendors, which share “commonality” in their products. Beresford wouldn’t reveal more details, but says he hopes to do so at a later date.

We’ve been living with full disclosure for so long that many people have forgotten what life was like before it was routine.

Before full disclosure was the norm, researchers would discover vulnerabilities in software and send details to the software companies—who would ignore them, trusting in the security of secrecy. Some would go so far as to threaten the researchers with legal action if they disclosed the vulnerabilities.

Later on, researchers announced that particular vulnerabilities existed, but did not publish details. Software companies would then call the vulnerabilities “theoretical” and deny that they actually existed. Of course, they would still ignore the problems, and occasionally threaten the researcher with legal action. Then, of course, some hacker would create an exploit using the vulnerability—and the company would release a really quick patch, apologize profusely, and then go on to explain that the whole thing was entirely the fault of the evil, vile hackers.

I wrote that in 2007. Siemens is doing it right now:

Beresford expressed frustration that Siemens appeared to imply the flaws in its SCADA systems gear might be difficult for a typical hacker to exploit because the vulnerabilities unearthed by NSS Labs “were discovered while working under special laboratory conditions with unlimited access to protocols and controllers.”

There were no “‘special laboratory conditions’ with ‘unlimited access to the protocols,’” Beresford wrote Monday about how he managed to find flaws in Siemens PLC gear that would allow an attacker to compromise them. “My personal apartment on the wrong side of town where I can hear gunshots at night hardly defines a special laboratory.” Beresford said he purchased the Siemens controllers with funding from his company and found the vulnerabilities, which he says hackers with bad intentions could do as well.

That’s precisely the point. Me again from 2007:

Unfortunately, secrecy sounds like a good idea. Keeping software vulnerabilities secret, the argument goes, keeps them out of the hands of the hackers…. But that assumes that hackers can’t discover vulnerabilities on their own, and that software companies will spend time and money fixing secret vulnerabilities. Both of those assumptions are false. Hackers have proven to be quite adept at discovering secret vulnerabilities, and full disclosure is the only reason vendors routinely patch their systems.

With the pressure off, Siemens is motivated to deal with the PR problem and ignore the underlying security problem.

Posted on May 24, 2011 at 5:50 AM

Interview with Me About the Sony Hack

This is what I get for giving interviews when I’m in a bad mood. For the record, I think Sony did a terrible job with its customers’ security. I also think that most companies do a terrible job with customers’ security, simply because there isn’t a financial incentive to do better. And that most of us are pretty secure, despite that.

One of my biggest complaints with these stories is how little actual information we have. We often don’t know if any data was actually stolen, only that hackers had access to it. We rarely know how the data was accessed: what sort of vulnerability was used by the hackers. We rarely know the motivations of the hackers: were they criminals, spies, kids, or someone else? We rarely know if the data is actually used for any nefarious purposes; it’s generally impossible to connect a data breach with a corresponding fraud incident. Given all of that, it’s impossible to say anything useful or definitive about the attack. But the press always wants definitive statements.

Posted on May 13, 2011 at 11:29 AM

The Era of "Steal Everything"

Good comment:

“We’re moving into an era of ‘steal everything’,” said David Emm, a senior security researcher for Kaspersky Labs.

He believes that cyber criminals are now no longer just targeting banks or retailers in the search for financial details, but instead going after social and other networks which encourage the sharing of vast amounts of personal information.

As both data storage and data processing become cheaper, more and more data is collected and stored. An unanticipated effect of this is that more and more data can be stolen and used. As the article says, data minimization is the most effective security tool against this sort of thing. But—of course—it’s not in the database owner’s interest to limit the data it collects; it’s in the interests of those whom the data is about.
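
For illustration, here is a hypothetical sketch of data minimization at the point of collection; the field names and the choice of what to keep are invented, not drawn from the article.

```python
# Hypothetical sketch of data minimization: keep only the fields the service
# actually needs, so everything else is never stored and can never be stolen.
# Field names and retention choices are invented for illustration.

REQUIRED_FIELDS = {"email", "display_name"}

def minimize(signup_form: dict) -> dict:
    """Return only the required fields; everything else is dropped before storage."""
    return {k: v for k, v in signup_form.items() if k in REQUIRED_FIELDS}

submitted = {
    "email": "alice@example.com",
    "display_name": "Alice",
    "birthdate": "1980-01-01",   # dropped: never written to the database
    "phone": "555-0100",         # dropped: never written to the database
}

print(minimize(submitted))
# {'email': 'alice@example.com', 'display_name': 'Alice'}
```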

Posted on May 10, 2011 at 6:20 AM

Changing Incentives Creates Security Risks

One of the things I am writing about in my new book is how security equilibriums change. They often change because of technology, but they sometimes change because of incentives.

An interesting example of this is the recent scandal in the Washington, DC, public school system over teachers changing their students’ test answers.

In the U.S., under the No Child Left Behind Act, students have to pass certain tests; otherwise, schools are penalized. In the District of Columbia, things went further. Michelle Rhee, chancellor of the public school system from 2007 to 2010, offered teachers $8,000 bonuses—and threatened them with termination—for improving test scores. Scores did increase significantly during the period, and the schools were held up as examples of how incentives affect teaching behavior.

It turns out that a lot of those score increases were faked. In addition to teaching students, teachers cheated on their students’ tests by changing wrong answers to correct ones. That’s how the cheating was discovered; researchers looked at the actual test papers and found more erasures than usual, and many more erasures from wrong answers to correct ones than could be explained by anything other than deliberate manipulation.
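
The detection method is, at heart, simple outlier analysis: compare each classroom’s rate of wrong-to-right erasures against the district-wide norm and flag the extreme cases. Here is a minimal sketch of that idea; the numbers and the cutoff are invented for illustration.

```python
# Minimal sketch of the erasure analysis: flag classrooms whose average
# wrong-to-right erasure counts sit far above the district-wide distribution.
# All numbers, including the cutoff, are invented for illustration.

DISTRICT_MEAN = 1.1   # assumed district-wide wrong-to-right erasures per test
DISTRICT_STD = 0.5    # assumed district-wide standard deviation

classrooms = {"Room A": 1.3, "Room B": 0.9, "Room C": 12.7, "Room D": 1.0}

for room, erasures in classrooms.items():
    z = (erasures - DISTRICT_MEAN) / DISTRICT_STD
    if z > 4:  # flag anything several standard deviations above the norm
        print(f"{room}: {erasures} wrong-to-right erasures per test "
              f"({z:.1f} standard deviations above the district average)")
```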

Teachers were always able to manipulate their students’ test answers, but before, there wasn’t much incentive to do so. With Rhee’s changes, there was a much greater incentive to cheat.

The point is that whatever security measures were in place to prevent teacher cheating before the financial incentives and threats of firing were introduced weren’t sufficient to prevent it afterwards. Because Rhee significantly increased the costs of cooperation (by threatening to fire teachers of poorly performing students) and increased the benefits of defection ($8,000), she created a security risk. And she should have increased security measures to restore balance to those incentives.

This is not isolated to DC. It has happened elsewhere as well.

Posted on April 14, 2011 at 6:36 AM

Reducing Bribery by Legalizing the Giving of Bribes

Here’s some very clever thinking from India’s chief economic adviser. In order to reduce bribery, he proposes legalizing the giving of bribes:

Under the current law, discussed in some detail in the next section, once a bribe is given, the bribe giver and the bribe taker become partners in crime. It is in their joint interest to keep this fact hidden from the authorities and to be fugitives from the law, because, if caught, both expect to be punished. Under the kind of revised law that I am proposing here, once a bribe is given and the bribe giver collects whatever she is trying to acquire by giving the money, the interests of the bribe taker and bribe giver become completely orthogonal to each other. If caught, the bribe giver will go scot free and will be able to collect his bribe money back. The bribe taker, on the other hand, loses the booty of bribe and faces a hefty punishment.

Hence, in the post-bribe situation it is in the interest of the bribe giver to have the bribe taker caught. Since the bribe giver will cooperate with the law, the chances are much higher of the bribe taker getting caught. In fact, it will be in the interest of the bribe giver to have the taker get caught, since that way the bribe giver can get back the money she gave as bribe. Since the bribe taker knows this, he will be much less inclined to take the bribe in the first place. This establishes that there will be a drop in the incidence of bribery.

He notes that this only works for a certain class of bribes: when you have to bribe officials for something you are already entitled to receive. It won’t work for any long-term bribery relationship, or in any situation where the briber would otherwise not want the bribe to become public.
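
A toy payoff model makes the incentive flip explicit. Under the current law the giver shares the punishment; under the proposed law the giver goes free and recovers the bribe, so reporting becomes profitable. The amounts and probability below are invented for illustration.

```python
# Toy model of the bribe giver's expected payoff from reporting the bribe taker,
# before and after the proposed legal change. All numbers are invented.

BRIBE = 100          # amount paid as a bribe
FINE = 500           # punishment if convicted of giving a bribe
P_CONVICTION = 0.8   # assumed chance that a report leads to conviction

def payoff_from_reporting(giver_is_liable: bool) -> float:
    """Expected change in the giver's wealth from reporting (versus staying silent)."""
    if giver_is_liable:
        return P_CONVICTION * (-FINE)   # current law: reporting implicates the giver too
    return P_CONVICTION * BRIBE         # proposed law: giver goes free, bribe refunded

print("Current law: ", payoff_from_reporting(True))    # -400.0: silence is rational
print("Proposed law:", payoff_from_reporting(False))   #   80.0: reporting is rational
# Because the taker knows the giver now profits from reporting, the taker is
# less willing to accept the bribe in the first place.
```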

News article.

Posted on April 5, 2011 at 8:46 AM
