Entries Tagged "incentives"


Security Externalities and DDoS Attacks

Ed Felten has a really good blog post about the externalities that the recent Spamhaus DDoS attack exploited:

The attackers’ goal was to flood Spamhaus or its network providers with Internet traffic, to overwhelm their capacity to handle incoming network packets. The main technical problem faced by a DoS attacker is how to amplify the attacker’s traffic-sending capacity, so that the amount of traffic arriving at the target is much greater than the attacker can send himself. To do this, the attacker typically tries to induce many computers around the Internet to send large amounts of traffic to the target.

The first stage of the attack involved the use of a botnet, consisting of a large number of software agents surreptitiously installed on the computers of ordinary users. These bots were commanded to send attack traffic. Notice how this amplifies the attacker’s traffic-sending capability: by sending a few commands to the botnet, the attacker can induce the botnet to send large amounts of attack traffic. This step exploits our first externality: the owners of the bot-infected computers might have done more to prevent the infection, but the harm from this kind of attack activity falls onto strangers, so the computer owners had a reduced incentive to prevent it.

Rather than having the bots send traffic directly to Spamhaus, the attackers used another step to further amplify the volume of traffic. They had the bots send queries to DNS proxies across the Internet (which answer questions about how machine names like www.freedom-to-tinker.com relate to IP addresses like 209.20.73.44). This amplifies traffic because the bots can send a small query that elicits a large response message from the proxy.

Here is our second externality: the existence of open DNS proxies that will respond to requests from anywhere on the Internet. Many organizations run DNS proxies for use by their own people. A well-managed DNS proxy is supposed to check that requests are coming from within the same organization; but many proxies fail to check this—they’re “open” and will respond to requests from anywhere. This can lead to trouble, but the resulting harm falls mostly on people outside the organization (e.g. Spamhaus) so there isn’t much incentive to take even simple steps to prevent it.

To complete the attack, the DNS requests were sent with false return addresses, saying that the queries had come from Spamhaus—which causes the DNS proxies to direct their large response messages to Spamhaus.

Here is our third externality: the failure to detect packets with forged return addresses. When a packet with a false return address is injected, it’s fairly easy for the originating network to detect this: if a packet comes from inside your organization, but it has a return address that is outside your organization, then the return address must be forged and the packet should be discarded. But many networks fail to check this. This causes harm but—you guessed it—the harm falls outside the organization, so there isn’t much incentive to check. And indeed, this kind of packet filtering has long been considered a best practice but many networks still fail to do it.
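The mechanics Felten describes are simple enough to sketch. Here is a minimal illustration in Python; the packet sizes, traffic figures, and network range (`OUR_NETWORK`, drawn from the RFC 5737 documentation blocks) are illustrative assumptions, not measurements from the actual attack:

```python
import ipaddress

# Amplification: a DNS query is small, but a response from an open
# resolver can be dozens of times larger. Illustrative sizes only.
query_bytes = 64
response_bytes = 3200
amplification = response_bytes / query_bytes  # 50x

# Traffic the bots send vs. what arrives at the victim.
bot_traffic_mbps = 20
victim_traffic_mbps = bot_traffic_mbps * amplification  # 1000 Mbps

# The egress check many networks skip (the long-standing best practice
# codified as BCP 38): if a packet leaving your network claims a source
# address outside your own address space, the return address is forged
# and the packet should be dropped.
OUR_NETWORK = ipaddress.ip_network("192.0.2.0/24")  # example range

def should_forward(source_ip: str) -> bool:
    """Forward an outbound packet only if its source address is ours."""
    return ipaddress.ip_address(source_ip) in OUR_NETWORK

print(should_forward("192.0.2.17"))   # legitimate source: True
print(should_forward("203.0.113.5"))  # spoofed source: False, drop it
```

Note that the check runs at the *originating* network, which is exactly why the externality bites: the network doing the filtering gets none of the benefit.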

I’ve been writing about security externalities for years. They’re often much harder to solve than technical problems.

By the way, a lot of the hype surrounding this attack was media manipulation.

Posted on April 10, 2013 at 12:46 PM

Getting Security Incentives Right

One of the problems with motivating proper security behavior within an organization is that the incentives are all wrong. It doesn’t matter how much management tells employees that security is important, employees know when it really isn’t—when getting the job done cheaply and on schedule is much more important.

It seems to me that his co-workers understand the risks better than he does. They know what the real risks are at work, and that they all revolve around not getting the job done. Those risks are real and tangible, and employees feel them all the time. The risks of not following security procedures are much less real. Maybe the employee will get caught, but probably not. And even if he does get caught, the penalties aren’t serious.

Given this accurate risk analysis, any rational employee will regularly circumvent security to get his or her job done. That’s what the company rewards, and that’s what the company actually wants.

“Fire someone who breaks security procedure, quickly and publicly,” I suggested to the presenter. “That’ll increase security awareness faster than any of your posters or lectures or newsletters.” If the risks are real, people will get it.

Similarly, there’s supposedly an old Chinese proverb that goes “hang one, warn a thousand.” Or to put it another way, we’re really good at responding to risks we can see. And there’s Admiral John Byng, whose execution gave rise to Voltaire’s line (originally in French): “in this country, it is good to kill an admiral from time to time, in order to encourage the others.”

I thought of all this when I read about the new security procedures surrounding the upcoming papal election:

According to the order, which the Vatican made available in English on Monday afternoon, those few who are allowed into the secret vote to act as aides will be required to take an oath of secrecy.

“I will observe absolute and perpetual secrecy with all who are not part of the College of Cardinal electors concerning all matters directly or indirectly related to the ballots cast and their scrutiny for the election of the Supreme Pontiff,” the oath reads.

“I declare that I take this oath fully aware that an infraction thereof will make me subject to the penalty of excommunication ‘latae sententiae’, which is reserved to the Apostolic See,” it continues.

Excommunication is like being fired, only it lasts for eternity.

I’m not optimistic about the College of Cardinals being able to maintain absolute secrecy during the election, because electronic devices have become so small, and electronic communications so ubiquitous. Unless someone wins on one of the first ballots—a 2/3 majority is required to elect the next pope, so if the various factions entrench they could be at it for a while—there are going to be leaks. Perhaps accidental, perhaps strategic: these cardinals are fallible men, after all.

Posted on March 4, 2013 at 6:38 AM

Overreacting to Potential Bombs

This is a ridiculous overreaction:

The police bomb squad was called to 2 World Financial Center in lower Manhattan at midday when a security guard reported a package that seemed suspicious. Brookfield Properties, which runs the property, ordered an evacuation as a precaution.

That’s the entire building, a 44-story, 2.5-million-square-foot office building. And why?

The bomb squad determined the package was a fake explosive that looked like a 1940s-style pineapple grenade. It was mounted on a plaque that said “Complaint department: Take a number,” with a number attached to the pin.

It was addressed to someone at one of the financial institutions housed there and discovered by someone in the mail room.

If the grenade had been real, it could have destroyed—what?—a room. Of course, there’s no downside to Brookfield Properties overreacting.

Posted on May 8, 2012 at 7:03 AM

Bomb Threats As a Denial-of-Service Attack

The University of Pittsburgh has been the recipient of 50 bomb threats in the past two months (over 30 during the last week). Each time, the university evacuates the threatened building, searches it top to bottom—one of the threatened buildings is the 42-story Cathedral of Learning—finds nothing, and eventually resumes classes. This seems to be nothing more than a very effective denial-of-service attack.

Police have no leads. The threats started out as handwritten messages on bathroom walls, but are now being sent via e-mail and anonymous remailers. (Here is a blog and a Google Docs spreadsheet documenting the individual threats.)

The University is implementing some pretty annoying security theater in response:

To enter secured buildings, we all will need to present a University of Pittsburgh ID card. It is important to understand that book bags, backpacks and packages will not be allowed. There will be single entrances to buildings so there will be longer waiting times to get into the buildings. In addition, non-University of Pittsburgh residents will not be allowed in the residence halls.

I can’t see how this will help, but what else can the University do? Their incentives are such that they’re stuck overreacting. If they ignore the threats and they’re wrong, people will be fired. If they overreact to the threats and they’re wrong, they’ll be forgiven. There’s no incentive to do an actual cost-benefit analysis of the security measures.

For the attacker, though, the cost-benefit payoff is enormous. E-mails are cheap, and the response they induce is very expensive.

If you have any information about the bomb threatener, contact the FBI. There’s a $50,000 reward waiting for you. For the university, paying that would be a bargain.

Posted on April 12, 2012 at 1:34 PM

The Dilemma of Counterterrorism Policy

Any institution tasked with preventing terrorism faces a dilemma: it can either do its best to prevent terrorism, or it can do its best to make sure it’s not blamed for any terrorist attacks. I’ve talked about this dilemma for a while now, and it’s nice to see some research results that demonstrate its effects.

A. Peter McGraw, Alexander Todorov, and Howard Kunreuther, “A Policy Maker’s Dilemma: Preventing Terrorism or Preventing Blame,” Organizational Behavior and Human Decision Processes, 115 (May 2011): 25-34.

Abstract: Although anti-terrorism policy should be based on a normative treatment of risk that incorporates likelihoods of attack, policy makers’ anti-terror decisions may be influenced by the blame they expect from failing to prevent attacks. We show that people’s anti-terror budget priorities before a perceived attack and blame judgments after a perceived attack are associated with the attack’s severity and how upsetting it is but largely independent of its likelihood. We also show that anti-terror budget priorities are influenced by directly highlighting the likelihood of the attack, but because of outcome biases, highlighting the attack’s prior likelihood has no influence on judgments of blame, severity, or emotion after an attack is perceived to have occurred. Thus, because of accountability effects, we propose policy makers face a dilemma: prevent terrorism using normative methods that incorporate the likelihood of attack or prevent blame by preventing terrorist attacks the public find most blameworthy.

Think about this with respect to the TSA. Are they doing their best to mitigate terrorism, or are they doing their best to ensure that if there’s a terrorist attack the public doesn’t blame the TSA for missing it?

Posted on August 19, 2011 at 8:55 AM

Court Ruling on "Reasonable" Electronic Banking Security

One of the pleasant side effects of being too busy to write longer blog posts is that—if I wait long enough—someone else writes what I would have wanted to.

The ruling in the Patco Construction vs. People’s United Bank case is important, because the judge basically ruled that the bank’s substandard security was good enough—and Patco is stuck paying for the fraud that was a result of that substandard security. The details are important, and Brian Krebs has written an excellent summary.

EDITED TO ADD (7/13): Krebs also writes about a case going in the opposite direction in a Michigan court.

Posted on June 17, 2011 at 12:09 PM

New Siemens SCADA Vulnerabilities Kept Secret

SCADA systems—computer systems that control industrial processes—are one of the ways a computer hack can directly affect the real world. Here, the fears multiply. It’s not bad guys deleting your files, or getting your personal information and taking out credit cards in your name; it’s bad guys spewing chemicals into the atmosphere and dumping raw sewage into waterways. It’s Stuxnet: centrifuges spinning out of control and destroying themselves. Never mind how realistic the threat is, it’s scarier.

Last week, a researcher was successfully pressured by the Department of Homeland Security not to disclose details “before Siemens could patch the vulnerabilities.”

Beresford wouldn’t say how many vulnerabilities he found in the Siemens products, but said he gave the company four exploit modules to test. He believes that at least one of the vulnerabilities he found affects multiple SCADA-system vendors, which share “commonality” in their products. Beresford wouldn’t reveal more details, but says he hopes to do so at a later date.

We’ve been living with full disclosure for so long that many people have forgotten what life was like before it was routine.

Before full disclosure was the norm, researchers would discover vulnerabilities in software and send details to the software companies—who would ignore them, trusting in the security of secrecy. Some would go so far as to threaten the researchers with legal action if they disclosed the vulnerabilities.

Later on, researchers announced that particular vulnerabilities existed, but did not publish details. Software companies would then call the vulnerabilities “theoretical” and deny that they actually existed. Of course, they would still ignore the problems, and occasionally threaten the researcher with legal action. Then, of course, some hacker would create an exploit using the vulnerability—and the company would release a really quick patch, apologize profusely, and then go on to explain that the whole thing was entirely the fault of the evil, vile hackers.

I wrote that in 2007. Siemens is doing it right now:

Beresford expressed frustration that Siemens appeared to imply the flaws in its SCADA systems gear might be difficult for a typical hacker to exploit because the vulnerabilities unearthed by NSS Labs “were discovered while working under special laboratory conditions with unlimited access to protocols and controllers.”

There were no “‘special laboratory conditions’ with ‘unlimited access to the protocols,'” Beresford wrote Monday about how he managed to find flaws in Siemens PLC gear that would allow an attacker to compromise them. “My personal apartment on the wrong side of town where I can hear gunshots at night hardly defines a special laboratory.” Beresford said he purchased the Siemens controllers with funding from his company and found the vulnerabilities, which he says hackers with bad intentions could do as well.

That’s precisely the point. Me again from 2007:

Unfortunately, secrecy sounds like a good idea. Keeping software vulnerabilities secret, the argument goes, keeps them out of the hands of the hackers…. But that assumes that hackers can’t discover vulnerabilities on their own, and that software companies will spend time and money fixing secret vulnerabilities. Both of those assumptions are false. Hackers have proven to be quite adept at discovering secret vulnerabilities, and full disclosure is the only reason vendors routinely patch their systems.

With the pressure off, Siemens is motivated to deal with the PR problem and ignore the underlying security problem.

Posted on May 24, 2011 at 5:50 AM

Interview with Me About the Sony Hack

This is what I get for giving interviews when I’m in a bad mood. For the record, I think Sony did a terrible job with its customers’ security. I also think that most companies do a terrible job with customers’ security, simply because there isn’t a financial incentive to do better. And that most of us are pretty secure, despite that.

One of my biggest complaints with these stories is how little actual information we have. We often don’t know if any data was actually stolen, only that hackers had access to it. We rarely know how the data was accessed: what sort of vulnerability was used by the hackers. We rarely know the motivations of the hackers: were they criminals, spies, kids, or someone else? We rarely know if the data is actually used for any nefarious purposes; it’s generally impossible to connect a data breach with a corresponding fraud incident. Given all of that, it’s impossible to say anything useful or definitive about the attack. But the press always wants definitive statements.

Posted on May 13, 2011 at 11:29 AM

The Era of "Steal Everything"

Good comment:

“We’re moving into an era of ‘steal everything’,” said David Emm, a senior security researcher for Kaspersky Labs.

He believes that cyber criminals are now no longer just targeting banks or retailers in the search for financial details, but instead going after social and other networks which encourage the sharing of vast amounts of personal information.

As both data storage and data processing become cheaper, more and more data is collected and stored. An unanticipated effect of this is that more and more data can be stolen and used. As the article says, data minimization is the most effective security tool against this sort of thing. But—of course—it’s not in the database owner’s interest to limit the data it collects; it’s in the interests of those whom the data is about.

Posted on May 10, 2011 at 6:20 AM
