It’s really hard to estimate the cost of an insecure Internet. Studies are all over the map. A methodical study by RAND is the best work I’ve seen at trying to put a number on this. The results are, well, all over the map:
“Estimating the Global Cost of Cyber Risk: Methodology and Examples”:
Abstract: There is marked variability from study to study in the estimated direct and systemic costs of cyber incidents, which is further complicated by the considerable variation in cyber risk in different countries and industry sectors. This report shares a transparent and adaptable methodology for estimating present and future global costs of cyber risk that acknowledges the considerable uncertainty in the frequencies and costs of cyber incidents. Specifically, this methodology (1) identifies the value at risk by country and industry sector; (2) computes direct costs by considering multiple financial exposures for each industry sector and the fraction of each exposure that is potentially at risk to cyber incidents; and (3) computes the systemic costs of cyber risk between industry sectors using Organisation for Economic Co-operation and Development input, output, and value-added data across sectors in more than 60 countries. The report has a companion Excel-based modeling and simulation platform that allows users to alter assumptions and investigate a wide variety of research questions. The authors used a literature review and data to create multiple sample sets of parameters. They then ran a set of case studies to show the model’s functionality and to compare the results against those in the existing literature. The resulting values are highly sensitive to input parameters; for instance, the global cost of cyber crime has direct gross domestic product (GDP) costs of $275 billion to $6.6 trillion and total GDP costs (direct plus systemic) of $799 billion to $22.5 trillion (1.1 to 32.4 percent of GDP).
Here’s RAND’s risk calculator, if you want to play with the parameters yourself.
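The direct-cost step of the methodology can be sketched in a few lines: for each sector, multiply financial exposure by the fraction of that exposure at risk to cyber incidents, then sum. This is an illustrative sketch only; the sectors, exposures, and fractions below are invented placeholders, not RAND's actual parameters.

```python
# Sketch of the report's direct-cost step: for each industry sector,
# direct cost = financial exposure x fraction of that exposure at risk
# to cyber incidents. All figures are made-up placeholders, not RAND's.

sectors = {
    "finance":       {"exposure": 1200.0, "fraction_at_risk": 0.020},  # $B
    "manufacturing": {"exposure": 2500.0, "fraction_at_risk": 0.005},
    "retail":        {"exposure": 900.0,  "fraction_at_risk": 0.010},
}

def direct_cost(sectors):
    """Sum exposure * at-risk fraction across all sectors ($billions)."""
    return sum(s["exposure"] * s["fraction_at_risk"] for s in sectors.values())

print(f"Estimated direct cost: ${direct_cost(sectors):.1f}B")
```

The report's wide output range comes from exactly this sensitivity: small changes to the at-risk fractions swing the total by orders of magnitude.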
Note: I was an advisor to the project.
Separately, Symantec has published a new cybercrime report with their own statistics.
Posted on January 29, 2018 at 6:18 AM
Interesting paper: John Scott-Railton on securing the high-risk user.
Posted on November 22, 2016 at 10:16 AM
Last week, Yahoo! announced that it was hacked pretty massively in 2014. Over half a billion usernames and passwords were affected, making this the largest data breach of all time.
Yahoo! claimed it was a government that did it:
A recent investigation by Yahoo! Inc. has confirmed that a copy of certain user account information was stolen from the company’s network in late 2014 by what it believes is a state-sponsored actor.
I did a bunch of press interviews after the hack, and repeatedly said that “state-sponsored actor” is often code for “please don’t blame us for our shoddy security because it was a really sophisticated attacker and we can’t be expected to defend ourselves against that.”
Well, it turns out that Yahoo! had shoddy security and it was a bunch of criminals that hacked them. The first story is from the New York Times, and outlines the many ways Yahoo! ignored security issues.
But when it came time to commit meaningful dollars to improve Yahoo’s security infrastructure, Ms. Mayer repeatedly clashed with Mr. Stamos, according to the current and former employees. She denied Yahoo’s security team financial resources and put off proactive security defenses, including intrusion-detection mechanisms for Yahoo’s production systems.
The second story is from the Wall Street Journal:
InfoArmor said the hackers, whom it calls “Group E,” have sold the entire Yahoo database at least three times, including one sale to a state-sponsored actor. But the hackers are engaged in a moneymaking enterprise and have “a significant criminal track record,” selling data to other criminals for spam or to affiliate marketers who aren’t acting on behalf of any government, said Andrew Komarov, chief intelligence officer with InfoArmor Inc.
That is not the profile of a state-sponsored hacker, Mr. Komarov said. “We don’t see any reason to say that it’s state sponsored,” he said. “Their clients are state sponsored, but not the actual hackers.”
Posted on September 30, 2016 at 6:16 AM
Interesting research from Sasha Romanosky at RAND:
Abstract: In 2013, the US President signed an executive order designed to help secure the nation’s critical infrastructure from cyberattacks. As part of that order, he directed the National Institute for Standards and Technology (NIST) to develop a framework that would become an authoritative source for information security best practices. Because adoption of the framework is voluntary, it faces the challenge of incentivizing firms to follow along. Will frameworks such as that proposed by NIST really induce firms to adopt better security controls? And if not, why? This research seeks to examine the composition and costs of cyber events, and attempts to address whether or not there exist incentives for firms to improve their security practices and reduce the risk of attack. Specifically, we examine a sample of over 12,000 cyber events that include data breaches, security incidents, privacy violations, and phishing crimes. First, we analyze the characteristics of these breaches (such as causes and types of information compromised). We then examine the breach and litigation rate, by industry, and identify the industries that incur the greatest costs from cyber events. We then compare these costs to bad debts and fraud within other industries. The findings suggest that public concerns regarding the increasing rates of breaches and legal actions may be excessive compared to the relatively modest financial impact to firms that suffer these events. Specifically, we find that the cost of a typical cyber incident in our sample is less than $200,000 (about the same as the firm’s annual IT security budget), and that this represents only 0.4% of their estimated annual revenues.
The result is that it often makes business sense to underspend on cybersecurity and just pay the costs of breaches:
Romanosky analyzed 12,000 incident reports and found that the typical incident accounts for only 0.4 per cent of a company’s annual revenues. That compares to billing fraud, which averages 5 per cent, or retail shrinkage (ie, shoplifting and insider theft), which accounts for 1.3 per cent of revenues.
As for reputational damage, Romanosky found that it was almost impossible to quantify. He spoke to many executives and none of them could give a reliable metric for how to measure the PR cost of a public failure of IT security systems.
He also noted that a data incident typically has little long-term effect on a company’s stock price. Under the circumstances, it doesn’t make a lot of sense to invest too much in cyber security.
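Those percentages can be made concrete with a back-of-the-envelope comparison. This is an illustrative sketch: the firm and its $50 million revenue are invented; only the three percentages come from the research.

```python
# Back-of-the-envelope comparison of loss categories for a hypothetical
# firm. Only the percentages (0.4% cyber incident, 5% billing fraud,
# 1.3% retail shrinkage) come from the study; the revenue is made up.

annual_revenue = 50_000_000  # hypothetical firm

loss_rates = {
    "cyber incident":   0.004,  # 0.4% of revenue (Romanosky's finding)
    "billing fraud":    0.050,  # 5% of revenue
    "retail shrinkage": 0.013,  # 1.3% of revenue
}

for name, rate in loss_rates.items():
    print(f"{name:>16}: ${annual_revenue * rate:,.0f}")
```

For this firm a typical cyber incident costs about $200,000, dwarfed by billing fraud at $2.5 million, which is why underspending on security can look rational from the inside.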
What’s being left out of these costs are the externalities. Yes, the costs to a company of a cyberattack are low to them, but there are often substantial additional costs borne by other people. The way to look at this is not to conclude that cybersecurity isn’t really a problem, but instead that there is a significant market failure that governments need to address.
Posted on September 29, 2016 at 6:51 AM
I have written before on the vulnerabilities equities process (VEP): the system by which the US government decides whether to disclose and fix a computer vulnerability or keep it secret and use it offensively. Ari Schwartz and Rob Knake, both former Directors for Cybersecurity Policy at the White House National Security Council, have written a report describing the process as we know it, with policy recommendations for improving it.
Basically, their recommendations are focused on improving the transparency, oversight, and accountability (three things I repeatedly recommend) of the process. In summary:
- The President should issue an Executive Order mandating government-wide compliance with the VEP.
- Make the general criteria used to decide whether or not to disclose a vulnerability public.
- Clearly define the VEP.
- Make sure any undisclosed vulnerabilities are reviewed periodically.
- Ensure that the government has the right to disclose any vulnerabilities it purchases.
- Transfer oversight of the VEP from the NSA to the DHS.
- Issue an annual report on the VEP.
- Expand Congressional oversight of the VEP.
- Mandate oversight by other independent bodies inside the Executive Branch.
- Expand funding for both offensive and defensive vulnerability research.
These all seem like good ideas to me. This is a complex issue, one I wrote about in Data and Goliath (pages 146-50), and one that’s only going to get more important in the Internet of Things.
Posted on July 11, 2016 at 12:15 PM
The New York Times wrote a good piece comparing airport security around the world, and pointing out that moving the security perimeter doesn’t make any difference if the attack can occur just outside the perimeter. Mark Stewart has the good quote:
“Perhaps the most cost-effective measure is policing and intelligence—to stop them before they reach the target,” Mr. Stewart said.
Sounds like something I would say.
Posted on July 6, 2016 at 9:45 AM
Interesting research: Mark G. Stewart and John Mueller, “Risk-based passenger screening: risk and economic assessment of TSA PreCheck increased security at reduced cost?”
Executive Summary: The Transportation Security Administration’s PreCheck program is risk-based screening that allows passengers assessed as low risk to be directed to expedited, or PreCheck, screening. We begin by modelling the overall system of aviation security by considering all layers of security designed to deter or disrupt a terrorist plot to down an airliner with a passenger-borne bomb. Our analysis suggests that these measures reduce the risk of such an attack by at least 98%. Assuming that the accuracy of Secure Flight may be less than 100% when identifying low and high risk passengers, we then assess the effect of enhanced and expedited (or regular and PreCheck) screening on deterrence and disruption rates. We also evaluate programs that randomly redirect passengers from the PreCheck to the regular lines (random exclusion) and ones that redirect some passengers from regular to PreCheck lines (managed inclusion). We find that, if 50% of passengers are cleared for PreCheck, the additional risk reduction (benefit) due to PreCheck is 0.021% for attacks by lone wolves, and 0.056% for ones by terrorist organisations. If 75% of passengers rather than 50% go through PreCheck, these numbers are 0.017% and 0.044%, still providing a benefit in risk reduction. Under most realistic combinations of parameter values PreCheck actually increases risk reduction, perhaps up to 1%, while under the worst assumptions, it lowers risk reduction only by some 0.1%. Extensive sensitivity analyses suggest that, overall, PreCheck is most likely to increase overall benefit.
The report also finds that adding random exclusion and managed inclusion to the PreCheck program has little effect on the risk-reducing capability of PreCheck one way or the other. For example, if 10% of non-PreCheck passengers are randomly sent to the PreCheck line, the program still delivers a benefit in risk reduction, and provides an additional savings for TSA of $11 million per year by reducing screening costs—while at the same time improving security outcomes.
There are also other co-benefits, and these are very substantial. Reducing checkpoint queuing times improves the passenger experience, which can lead to higher airline revenues exceeding several billion dollars per year. TSA PreCheck thus seems likely to bring considerable efficiencies to the screening process and great benefits to passengers, airports, and airlines while actually enhancing security a bit.
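The headline numbers combine as simple arithmetic: total risk reduction is the roughly 98% baseline from the existing security layers plus PreCheck's small marginal contribution. A minimal sketch, using the executive summary's figures for 50% PreCheck enrollment (treating the combination as additive is a simplification of the paper's model):

```python
# Combining the paper's headline figures: total risk reduction =
# baseline from all existing security layers + PreCheck's marginal
# contribution. Percentages are from the executive summary; the
# additive combination is a simplification for illustration.

BASELINE = 98.0  # % risk reduction from existing layered security

precheck_delta = {  # additional % risk reduction at 50% PreCheck enrollment
    "lone wolf":              0.021,
    "terrorist organisation": 0.056,
}

for attacker, delta in precheck_delta.items():
    print(f"{attacker}: {BASELINE + delta:.3f}% total risk reduction")
```

The point the sketch makes visible: the marginal security change from PreCheck is tiny against the baseline, so the program's case rests mostly on cost savings and passenger experience.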
Posted on June 28, 2016 at 2:10 PM
Interesting research paper: Cormac Herley, “Unfalsifiability of security claims”:
There is an inherent asymmetry in computer security: things can be declared insecure by observation, but not the reverse. There is no observation that allows us to declare an arbitrary system or technique secure. We show that this implies that claims of necessary conditions for security (and sufficient conditions for insecurity) are unfalsifiable. This in turn implies an asymmetry in self-correction: while the claim that countermeasures are sufficient is always subject to correction, the claim that they are necessary is not. Thus, the response to new information can only be to ratchet upward: newly observed or speculated attack capabilities can argue a countermeasure in, but no possible observation argues one out. Further, when justifications are unfalsifiable, deciding the relative importance of defensive measures reduces to a subjective comparison of assumptions. Relying on such claims is the source of two problems: once we go wrong we stay wrong and errors accumulate, and we have no systematic way to rank or prioritize measures.
This is both true and not true.
Mostly, it’s true. It’s true in cryptography, where we can never say that an algorithm is secure. We can either show how it’s insecure, or say something like: all of these smart people have spent lots of hours trying to break it, and they can’t—but we don’t know what a smarter person who spends even more hours analyzing it will come up with. It’s true in things like airport security, where we can easily point out insecurities but are unable to similarly demonstrate that some measures are unnecessary. And this does lead to a ratcheting up of security, in the absence of constraints like budget or processing speed. It’s easier to demand that everyone take off their shoes for special screening, or that we add another four rounds to the cipher, than to argue the reverse.
But it’s not entirely true. It’s difficult, but we can analyze the cost-effectiveness of different security measures. We can compare them with each other. We can make estimations and decisions and optimizations. It’s just not easy, and often it’s more of an art than a science. But all is not lost.
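One way to do that kind of comparison is to rank countermeasures by estimated loss averted per dollar spent. A minimal sketch; every measure and number below is invented for illustration, and in practice the inputs are themselves rough estimates:

```python
# Sketch of a cost-effectiveness comparison: rank countermeasures by
# estimated annual loss averted per dollar of annual cost. All measures
# and figures are invented for illustration.

measures = [
    # (name, annual cost $, estimated annual loss averted $)
    ("intrusion detection",      200_000, 1_500_000),
    ("extra cipher rounds",       50_000,    10_000),
    ("staff security training",   80_000,   400_000),
]

def benefit_cost_ratio(measure):
    """Loss averted per dollar spent (higher is better)."""
    name, cost, averted = measure
    return averted / cost

for name, cost, averted in sorted(measures, key=benefit_cost_ratio, reverse=True):
    print(f"{name}: {averted / cost:.1f}x return per dollar")
```

A ranking like this never proves any measure is unnecessary, which is Herley's point, but it does let you prioritize under a budget.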
Still, a very good paper and one worth reading.
Posted on May 27, 2016 at 6:19 AM
This is good:
Threats constantly change, yet our political discourse suggests that our vulnerabilities are simply for lack of resources, commitment or competence. Sometimes, that is true. But mostly we are vulnerable because we choose to be; because we’ve accepted, at least implicitly, that some risk is tolerable. A state that could stop every suicide bomber wouldn’t be a free or, let’s face it, fun one.
We will simply never get to maximum defensive posture. Regardless of political affiliation, Americans wouldn’t tolerate the delay or intrusion of an urban mass-transit system that required bag checks and pat-downs. After the 2013 Boston Marathon bombing, many wondered how to make the race safe the next year. A heavier police presence helps, but the only truly safe way to host a marathon is to not have one at all. The risks we tolerate, then, are not necessarily bad bargains simply because an enemy can exploit them.
No matter what promises are made on the campaign trail, terrorism will never be vanquished. There is no ideology, no surveillance, no wall that will definitely stop some 24-year-old from becoming radicalized on the Web, gaining access to guns and shooting a soft target. When we don’t admit this to ourselves, we often swing between the extremes of putting our heads in the sand or losing them entirely.
I am reminded of my own 2006 “Refuse to be Terrorized” essay.
Posted on April 3, 2016 at 7:42 PM