Interesting research from Sasha Romanosky at RAND:
Abstract: In 2013, the US President signed an executive order designed to help secure the nation’s critical infrastructure from cyberattacks. As part of that order, he directed the National Institute of Standards and Technology (NIST) to develop a framework that would become an authoritative source for information security best practices. Because adoption of the framework is voluntary, it faces the challenge of incentivizing firms to follow along. Will frameworks such as that proposed by NIST really induce firms to adopt better security controls? And if not, why? This research seeks to examine the composition and costs of cyber events, and attempts to address whether or not there exist incentives for firms to improve their security practices and reduce the risk of attack. Specifically, we examine a sample of over 12,000 cyber events that include data breaches, security incidents, privacy violations, and phishing crimes. First, we analyze the characteristics of these breaches (such as causes and types of information compromised). We then examine the breach and litigation rate, by industry, and identify the industries that incur the greatest costs from cyber events. We then compare these costs to bad debts and fraud within other industries. The findings suggest that public concerns regarding the increasing rates of breaches and legal actions may be excessive compared to the relatively modest financial impact to firms that suffer these events. Specifically, we find that the cost of a typical cyber incident in our sample is less than $200,000 (about the same as the firm’s annual IT security budget), and that this represents only 0.4% of their estimated annual revenues.
The result is that it often makes business sense to underspend on cybersecurity and just pay the costs of breaches:
Romanosky analyzed 12,000 incident reports and found that the costs typically account for only 0.4 per cent of a company’s annual revenues. That compares to billing fraud, which averages 5 per cent, or retail shrinkage (i.e., shoplifting and insider theft), which accounts for 1.3 per cent of revenues.
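Those ratios can be put side by side with some quick arithmetic. The percentages are the ones quoted above; the revenue figure is a made-up example:

```python
# Losses as a share of annual revenue, using the percentages quoted
# above; the revenue figure is a hypothetical example.
revenue = 50_000_000  # dollars per year

loss_rates = {
    "cyber incidents":  0.004,  # 0.4% (Romanosky's sample)
    "retail shrinkage": 0.013,  # 1.3%
    "billing fraud":    0.050,  # 5% average
}

for cause, rate in loss_rates.items():
    print(f"{cause:16s} ~${revenue * rate:>12,.0f} per year")
```

On these numbers, a firm loses more than ten times as much to ordinary billing fraud as to cyber incidents, which is the core of the argument.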
As for reputational damage, Romanosky found that it was almost impossible to quantify. He spoke to many executives and none of them could give a reliable metric for how to measure the PR cost of a public failure of IT security systems.
He also noted that a data incident typically has little long-term effect on a company’s stock price. Under the circumstances, it doesn’t make a lot of sense to invest too much in cybersecurity.
What’s being left out of these costs are the externalities. Yes, the costs to a company of a cyberattack are low to them, but there are often substantial additional costs borne by other people. The way to look at this is not to conclude that cybersecurity isn’t really a problem, but instead that there is a significant market failure that governments need to address.
Posted on September 29, 2016 at 6:51 AM
Interesting paper: “Security Collapse of the HTTPS Market.” From the conclusion:
Recent breaches at CAs have exposed several systemic vulnerabilities and market failures inherent in the current HTTPS authentication model: the security of the entire ecosystem suffers if any of the hundreds of CAs is compromised (weakest link); browsers are unable to revoke trust in major CAs (“too big to fail”); CAs manage to conceal security incidents (information asymmetry); and ultimately customers and end users bear the liability and damages of security incidents (negative externalities).
Understanding the market and value chain for HTTPS is essential to address these systemic vulnerabilities. The market is highly concentrated, with very large price differences among suppliers and limited price competition. Paradoxically, the current vulnerabilities benefit rather than hurt the dominant CAs, because among others, they are too big to fail.
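The weakest-link point lends itself to a back-of-the-envelope model; the independence assumption and the numbers below are mine, not the paper’s. If each of n trusted CAs has some small independent probability p of being compromised in a given period, the chance that at least one is compromised is 1 − (1 − p)^n, which climbs rapidly with n:

```python
# Chance that at least one of n equally-trusted CAs is compromised,
# assuming each CA is independently compromised with probability p
# (a deliberate simplification).
def p_any_compromise(n: int, p: float) -> float:
    return 1 - (1 - p) ** n

for n in (1, 10, 100, 650):  # browsers trust hundreds of CA certificates
    print(f"{n:4d} CAs -> {p_any_compromise(n, 0.01):.1%}")
```

Since any trusted CA can issue a certificate for any domain, the whole ecosystem is only as strong as this worst case.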
Posted on November 28, 2014 at 6:26 AM
This post by Aleatha Parker-Wood is very applicable to the things I wrote in Liars & Outliers:
A lot of fundamental social problems can be modeled as a disconnection between people who believe (correctly or incorrectly) that they are playing a non-iterated game (in the game theory sense of the word), and people who believe (correctly or incorrectly) that they are playing an iterated game.
For instance, mechanisms such as reputation, ostracism, shaming, etc., are all predicated on the idea that the person you’re shaming will reappear and have further interactions with the group. Legal punishment is only a deterrent if you can catch the person, and if the cost of the punishment exceeds the benefit of the crime.
If it is possible to act as if the game you are playing is a one-shot game (for instance, you have a very large population to hide in, you don’t need to ever interact with people again, or you can be anonymous), your optimal strategies are going to be different than if you will have to play the game many times, and live with the legal or social consequences of your actions. If you can make enough money as CEO to retire immediately, you may choose to do so, even if you’re so terrible at running the company that no one will ever hire you again.
Social cohesion can be thought of as a manifestation of how “iterated” people feel their interactions are, how likely they are to interact with the same people again and again and have to deal with long term consequences of locally optimal choices, or whether they feel they can “opt out” of consequences of interacting with some set of people in a poor way.
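The one-shot-versus-iterated distinction is textbook game theory, and it can be sketched numerically. In a repeated prisoner’s dilemma with the usual illustrative payoffs (temptation T, reward R, punishment P, sucker’s payoff S, with T > R > P > S; these numbers are not from the post), cooperating against a grim-trigger opponent beats a one-shot defection only when you weight future rounds heavily enough:

```python
# Textbook repeated prisoner's dilemma against a grim-trigger opponent.
# Payoffs (illustrative): T = temptation, R = mutual cooperation,
# P = mutual punishment, S = sucker's payoff.
T, R, P, S = 5, 3, 1, 0

def cooperation_pays(d: float) -> bool:
    """True if cooperating forever beats defecting once, given a
    discount factor d (how much the next round is worth to you)."""
    cooperate_forever = R / (1 - d)
    defect_once = T + d * P / (1 - d)  # one-shot gain, then punished forever
    return cooperate_forever >= defect_once

# Cooperation is sustainable only when d >= (T - R) / (T - P), i.e. 0.5 here.
print(cooperation_pays(0.9))  # values the future: cooperation pays
print(cooperation_pays(0.1))  # effectively one-shot: defection pays
```

The CEO example above is the d ≈ 0 case: when future rounds are worth nothing to you, defection is the rational strategy.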
Posted on May 23, 2013 at 9:18 AM
Ed Felten has a really good blog post about the externalities that the recent Spamhaus DDOS attack exploited:
The attackers’ goal was to flood Spamhaus or its network providers with Internet traffic, to overwhelm their capacity to handle incoming network packets. The main technical problem faced by a DoS attacker is how to amplify the attacker’s traffic-sending capacity, so that the amount of traffic arriving at the target is much greater than the attacker can send himself. To do this, the attacker typically tries to induce many computers around the Internet to send large amounts of traffic to the target.
The first stage of the attack involved the use of a botnet, consisting of a large number of software agents surreptitiously installed on the computers of ordinary users. These bots were commanded to send attack traffic. Notice how this amplifies the attacker’s traffic-sending capability: by sending a few commands to the botnet, the attacker can induce the botnet to send large amounts of attack traffic. This step exploits our first externality: the owners of the bot-infected computers might have done more to prevent the infection, but the harm from this kind of attack activity falls onto strangers, so the computer owners had a reduced incentive to prevent it.
Rather than having the bots send traffic directly to Spamhaus, the attackers used another step to further amplify the volume of traffic. They had the bots send queries to DNS proxies across the Internet (which answer questions about how machine names like www.freedom-to-tinker.com relate to IP addresses like 22.214.171.124). This amplifies traffic because the bots can send a small query that elicits a large response message from the proxy.
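The arithmetic of the amplification is simple. The byte counts below are typical ballpark figures, not measurements from this attack:

```python
# Back-of-the-envelope DNS amplification arithmetic.
# Byte sizes are ballpark figures, not measurements from this attack.
query_bytes = 64         # small spoofed query a bot sends to a proxy
response_bytes = 3000    # large response the proxy sends to the victim

amplification = response_bytes / query_bytes
bot_uplink_mbps = 100    # hypothetical combined bot send capacity

print(f"amplification factor: ~{amplification:.0f}x")
print(f"traffic at the victim: ~{bot_uplink_mbps * amplification:,.0f} Mbps")
```

Every byte a bot sends turns into tens of bytes arriving at the target, which is why open resolvers are such attractive attack infrastructure.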
Here is our second externality: the existence of open DNS proxies that will respond to requests from anywhere on the Internet. Many organizations run DNS proxies for use by their own people. A well-managed DNS proxy is supposed to check that requests are coming from within the same organization; but many proxies fail to check this — they’re “open” and will respond to requests from anywhere. This can lead to trouble, but the resulting harm falls mostly on people outside the organization (e.g. Spamhaus) so there isn’t much incentive to take even simple steps to prevent it.
To complete the attack, the DNS requests were sent with false return addresses, saying that the queries had come from Spamhaus — which causes the DNS proxies to direct their large response messages to Spamhaus.
Here is our third externality: the failure to detect packets with forged return addresses. When a packet with a false return address is injected, it’s fairly easy for the originating network to detect this: if a packet comes from inside your organization, but it has a return address that is outside your organization, then the return address must be forged and the packet should be discarded. But many networks fail to check this. This causes harm but — you guessed it — the harm falls outside the organization, so there isn’t much incentive to check. And indeed, this kind of packet filtering has long been considered a best practice but many networks still fail to do it.
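The check Felten describes is easy to state in code. Here is a minimal sketch of that egress filter (the idea behind the long-standing best practice, BCP 38); the organization’s prefix is a made-up documentation address block:

```python
# Minimal egress-filter sketch: forward an outbound packet only if its
# source (return) address falls inside the organization's own prefix.
# The prefix is a made-up documentation block (RFC 5737).
from ipaddress import ip_address, ip_network

ORG_PREFIX = ip_network("198.51.100.0/24")

def should_forward(source_ip: str) -> bool:
    """A packet originating inside the org with a return address
    outside the org's prefix must be forged, so it is dropped."""
    return ip_address(source_ip) in ORG_PREFIX

print(should_forward("198.51.100.7"))  # legitimate source address
print(should_forward("203.0.113.9"))   # forged return address
```

The check is cheap; the externality is that the network doing the filtering gets none of the benefit.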
I’ve been writing about security externalities for years. They’re often much harder to solve than technical problems.
By the way, a lot of the hype surrounding this attack was media manipulation.
Posted on April 10, 2013 at 12:46 PM
This informal survey produced the following result: “45% of the users found their email accounts more valuable than their bank accounts.”
The author believes this is evidence of some sophisticated security reasoning on the part of users:
From a security standpoint, I can’t agree more with these people. Email accounts are used most commonly to reset other websites’ account passwords, so if one gets compromised, the others will fall like dominoes.
I disagree. I think something a lot simpler is going on. People believe that if their bank account is hacked, the bank will help them clean up the mess and they’ll get their money back. And in most cases, they will. They know that if their e-mail is hacked, all the damage will be theirs to deal with. I think this is public opinion reflecting reality.
Posted on June 26, 2012 at 1:57 PM
Details are in the article, but here’s the general idea:
Let’s follow the flow of the users:
- Scammer buys user traffic from PornoXo.com and sends it to HQTubeVideos.
- HQTubeVideos loads, in invisible iframes, some parked domains with innocent-sounding names (relaxhealth.com, etc).
- In the parked domains, ad networks serve display and PPC ads.
- The click-fraud sites click on the ads that appear within the parked domains.
- The legitimate publishers get invisible/fraudulent traffic through the (fraudulently) clicked ads from parked domains.
- Brand advertisers place their ad on the websites of the legitimate publishers, which in reality appear within the (invisible) iframe of HQTubeVideos.
- AdSafe detects the attempted placement within the porn website, and prevents the ads of the brand publisher from appearing in the legitimate website, which is hosted within the invisible frame of the porn site.
Notice how nicely orchestrated the whole scheme is: the parked domains “launder” the porn traffic. The ad networks place the ads in legitimate-sounding parked domains, not in a porn site. The publishers get traffic from innocent domains such as RelaxHealth, not from porn sites. The porn site loads a variety of publishers, distributing the fraud across many publishers and many advertisers.
The most clever part of this is that it makes use of the natural externalities of the Internet.
And now let’s see who has the incentives to fight this. It is fraud, right? But I think it is a well-executed type of fraud: it targets and defrauds the player with the least incentive to fight the scam.
Who is affected? Let’s follow the money:
- The big brand advertisers (Continental, Coca Cola, Verizon, Vonage,…) pay the publishers and the ad networks for running their campaigns.
- The publishers pay the ad network and the scammer for the fraudulent clicks.
- The scammer pays PornoXo and TrafficHolder for the traffic.
The ad networks see clicks on their ads, they get paid, so not much to worry about. They would worry if their advertisers were not happy. But here we have a piece of genius:
The scammer did not target sites that would measure conversions or cost-per-acquisition. Instead, the scammer was targeting mainly sites that sell pay-per-impression ads and video ads. If the publishers display CPM ads paid by impression, any traffic is good, all impressions count. It is not an accident that the scammer targets publishers with video content, and plenty of pay-per-impression video ads. The publishers have no reason to worry if they get traffic and the cost-per-visit is low.
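The difference can be made concrete with made-up numbers: a publisher paid per impression (CPM) earns the same from laundered traffic as from real traffic, while a publisher paid per acquisition (CPA) would earn nothing from invisible iframes and notice immediately:

```python
# Why CPM publishers don't notice laundered traffic (made-up numbers).
visits = 100_000
cpm_rate = 2.00            # dollars per 1,000 impressions
cpa_rate = 10.00           # dollars per conversion
real_conversions = 1_000   # ~1% of real visitors convert
fraud_conversions = 0      # invisible iframes never convert

cpm_revenue_real = visits / 1_000 * cpm_rate   # impressions all count,
cpm_revenue_fraud = visits / 1_000 * cpm_rate  # real or fraudulent
cpa_revenue_real = real_conversions * cpa_rate
cpa_revenue_fraud = fraud_conversions * cpa_rate  # zero: fraud is obvious

print(cpm_revenue_real, cpm_revenue_fraud, cpa_revenue_real, cpa_revenue_fraud)
```

Under CPM, the fraudulent traffic is indistinguishable from the real thing on the publisher’s books; only a conversion metric would expose it.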
Effectively, the only one hurt in this chain are the big brand advertisers, who feed the rest of the advertising chain.
Do the big brands care about this type of fraud? Yes and no, but not really deeply. Yes, they pay for some “invisible impressions.” But this is a marketing campaign, and not all marketing attempts are successful. Do all readers of the Economist look at the printed ads? Hardly. Do all web users pay attention to banner ads? I do not think so. Invisible ads are just one of the things that make advertising a little more expensive and harder. Consider it part of the cost of doing business. In any case, compared to the overall marketing budgets of these behemoths, the cost of such fraud is peanuts.
The big brands do not want their brand to be hurt. If the ads do not appear in places inappropriate for the brand, things are fine. Fighting the fraud publicly? This will just associate the brand with fraud. No marketing department wants that.
Posted on May 22, 2012 at 6:24 AM
This is a ridiculous overreaction:
The police bomb squad was called to 2 World Financial Center in lower Manhattan at midday when a security guard reported a package that seemed suspicious. Brookfield Properties, which runs the property, ordered an evacuation as a precaution.
That’s the entire building, a 44-story, 2.5-million-square-foot office building. And why?
The bomb squad determined the package was a fake explosive that looked like a 1940s-style pineapple grenade. It was mounted on a plaque that said “Complaint department: Take a number,” with a number attached to the pin.
It was addressed to someone at one of the financial institutions housed there and discovered by someone in the mail room.
If the grenade had been real, it could have destroyed — what? — a room. Of course, there’s no downside to Brookfield Properties overreacting.
Posted on May 8, 2012 at 7:03 AM
The University of Pittsburgh has been the recipient of 50 bomb threats in the past two months (over 30 during the last week). Each time, the university evacuates the threatened building, searches it top to bottom — one of the threatened buildings is the 42-story Cathedral of Learning — finds nothing, and eventually resumes classes. This seems to be nothing more than a very effective denial-of-service attack.
Police have no leads. The threats started out as handwritten messages on bathroom walls, but are now being sent via e-mail and anonymous remailers. (Here is a blog and a Google Docs spreadsheet documenting the individual threats.)
The University is implementing some pretty annoying security theater in response:
To enter secured buildings, we all will need to present a University of Pittsburgh ID card. It is important to understand that book bags, backpacks and packages will not be allowed. There will be single entrances to buildings so there will be longer waiting times to get into the buildings. In addition, non-University of Pittsburgh residents will not be allowed in the residence halls.
I can’t see how this will help, but what else can the University do? Their incentives are such that they’re stuck overreacting. If they ignore the threats and they’re wrong, people will be fired. If they overreact to the threats and they’re wrong, they’ll be forgiven. There’s no incentive to do an actual cost-benefit analysis of the security measures.
For the attacker, though, the cost-benefit payoff is enormous. E-mails are cheap, and the response they induce is very expensive.
If you have any information about the bomb threatener, contact the FBI. There’s a $50,000 reward waiting for you. For the university, paying that would be a bargain.
Posted on April 12, 2012 at 1:34 PM
Halderman argued that secure software tends to come from companies that have a culture of taking security seriously. But it’s hard to mandate, or even to measure, “security consciousness” from outside a company. A regulatory agency can force a company to go through the motions of beefing up its security, but it’s not likely to be effective unless management’s heart is in it.
This is a key advantage of using liability as the centerpiece of security policy. By making companies financially responsible for the actual harms caused by security failures, lawsuits give management a strong motivation to take security seriously without requiring the government to directly measure and penalize security problems. Sony allegedly laid off security personnel ahead of this year’s attacks. Presumably it thought this would be a cost-saving move; a big class action lawsuit could ensure that other companies don’t repeat that mistake in future.
I’ve been talking about liabilities for about a decade now. Here are essays I’ve written in 2002, 2003, 2004, and 2006.
Posted on July 27, 2011 at 6:44 AM
One of the pleasant side effects of being too busy to write longer blog posts is that — if I wait long enough — someone else writes what I would have wanted to.
The ruling in the Patco Construction vs. People’s United Bank case is important, because the judge basically ruled that the bank’s substandard security was good enough — and Patco is stuck paying for the fraud that was a result of that substandard security. The details are important, and Brian Krebs has written an excellent summary.
EDITED TO ADD (7/13): Krebs also writes about a case going in the opposite direction in a Michigan court.
Posted on June 17, 2011 at 12:09 PM