Blog: September 2016 Archives
Last week, Yahoo! announced that it was hacked pretty massively in 2014. Over half a billion usernames and passwords were affected, making this the largest data breach of all time.
Yahoo! claimed it was a government that did it:
A recent investigation by Yahoo! Inc. has confirmed that a copy of certain user account information was stolen from the company’s network in late 2014 by what it believes is a state-sponsored actor.
I did a bunch of press interviews after the hack, and repeatedly said that “state-sponsored actor” is often code for “please don’t blame us for our shoddy security because it was a really sophisticated attacker and we can’t be expected to defend ourselves against that.”
Well, it turns out that Yahoo! had shoddy security and it was a bunch of criminals that hacked them. The first story is from the New York Times, and outlines the many ways Yahoo! ignored security issues.
But when it came time to commit meaningful dollars to improve Yahoo’s security infrastructure, Ms. Mayer repeatedly clashed with Mr. Stamos, according to the current and former employees. She denied Yahoo’s security team financial resources and put off proactive security defenses, including intrusion-detection mechanisms for Yahoo’s production systems.
The second story is from the Wall Street Journal:
InfoArmor said the hackers, whom it calls “Group E,” have sold the entire Yahoo database at least three times, including one sale to a state-sponsored actor. But the hackers are engaged in a moneymaking enterprise and have “a significant criminal track record,” selling data to other criminals for spam or to affiliate marketers who aren’t acting on behalf of any government, said Andrew Komarov, chief intelligence officer with InfoArmor Inc.
That is not the profile of a state-sponsored hacker, Mr. Komarov said. “We don’t see any reason to say that it’s state sponsored,” he said. “Their clients are state sponsored, but not the actual hackers.”
Interesting research from Sasha Romanosky at RAND:
Abstract: In 2013, the US President signed an executive order designed to help secure the nation's critical infrastructure from cyberattacks. As part of that order, he directed the National Institute of Standards and Technology (NIST) to develop a framework that would become an authoritative source for information security best practices. Because adoption of the framework is voluntary, it faces the challenge of incentivizing firms to follow along. Will frameworks such as that proposed by NIST really induce firms to adopt better security controls? And if not, why? This research seeks to examine the composition and costs of cyber events, and attempts to address whether or not there exist incentives for firms to improve their security practices and reduce the risk of attack. Specifically, we examine a sample of over 12,000 cyber events that include data breaches, security incidents, privacy violations, and phishing crimes. First, we analyze the characteristics of these breaches (such as causes and types of information compromised). We then examine the breach and litigation rate, by industry, and identify the industries that incur the greatest costs from cyber events. We then compare these costs to bad debts and fraud within other industries. Public concerns regarding the increasing rates of breaches and legal actions conflict, however, with our findings, which show a much smaller financial impact to firms that suffer these events. Specifically, we find that the cost of a typical cyber incident in our sample is less than $200,000 (about the same as the firm's annual IT security budget), and that this represents only 0.4% of their estimated annual revenues.
The result is that it often makes business sense to underspend on cybersecurity and just pay the costs of breaches:
Romanosky analyzed 12,000 incident reports and found that typically they only account for 0.4 per cent of a company’s annual revenues. That compares to billing fraud, which averages at 5 per cent, or retail shrinkage (ie, shoplifting and insider theft), which accounts for 1.3 per cent of revenues.
As for reputational damage, Romanosky found that it was almost impossible to quantify. He spoke to many executives and none of them could give a reliable metric for how to measure the PR cost of a public failure of IT security systems.
He also noted that the effects of a data incident typically don’t have many ramifications on the stock price of a company in the long term. Under the circumstances, it doesn’t make a lot of sense to invest too much in cyber security.
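The figures quoted above imply a quick back-of-the-envelope comparison. This is a sketch, not part of Romanosky's analysis; the implied revenue figure is simply derived from the two numbers in the article ($200,000 being 0.4% of annual revenue):

```python
# Back-of-the-envelope comparison of the loss rates quoted above.
# The "implied revenue" is an inference from the article's own numbers,
# not a figure from the paper.

typical_incident_cost = 200_000        # typical cyber incident cost (USD)
incident_share_of_revenue = 0.004      # 0.4% of annual revenue

implied_revenue = typical_incident_cost / incident_share_of_revenue
print(f"Implied annual revenue: ${implied_revenue:,.0f}")

# Loss rates from the article, as fractions of annual revenue
loss_rates = {
    "cyber incident":   0.004,   # 0.4%
    "retail shrinkage": 0.013,   # 1.3%
    "billing fraud":    0.05,    # 5%
}
for name, rate in sorted(loss_rates.items(), key=lambda kv: kv[1]):
    print(f"{name:>16}: {rate:.1%} of revenue = ${rate * implied_revenue:,.0f}")
```

On those numbers, a typical breach costs an order of magnitude less than routine billing fraud, which is exactly why underspending can look rational to an individual firm.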
What’s being left out of these costs are the externalities. Yes, the costs to a company of a cyberattack are low to them, but there are often substantial additional costs borne by other people. The way to look at this is not to conclude that cybersecurity isn’t really a problem, but instead that there is a significant market failure that governments need to address.
A new malware sample tries to detect whether it's running in a virtual machine or sandboxed test environment by looking for signs of normal use, and refuses to execute if they're absent.
From a news article:
A typical test environment consists of a fresh Windows computer image loaded into a VM environment. The OS image usually lacks documents and other telltale signs of real world use, Fenton said. The malware sample that Fenton found…looks for existing documents on targeted PCs.
If no Microsoft Word documents are found, the VBA macro code execution terminates, shielding the malware from automated analysis and detection. Alternately, if more than two Word documents are found on the targeted system, the macro will download and install the malware payload.
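The heuristic described above is simple to express. Here is an illustrative sketch in Python (the actual malware uses a VBA macro); the article specifies the behavior for zero documents and for more than two, so this sketch declines in the unspecified one-to-two-document case:

```python
# Sketch of the sandbox-evasion heuristic described above: count Word
# documents and only proceed on machines that show signs of real use.
from pathlib import Path

def looks_like_real_machine(home: str) -> bool:
    """Return True only if the host looks like a real user's machine."""
    docs = list(Path(home).rglob("*.doc*"))  # .doc and .docx files
    if len(docs) == 0:
        return False   # fresh OS image: likely a sandbox, terminate
    # The article only says "more than two" documents triggers install;
    # behavior for 1-2 documents is unspecified, so this sketch declines.
    return len(docs) > 2
```

The defensive lesson is the mirror image: analysis sandboxes become more effective when they are seeded with the clutter of real-world use.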
EDITED TO ADD (10/16): Some details.
Neural networks are good at identifying faces, even if they’re blurry:
In a paper released earlier this month, researchers at UT Austin and Cornell University demonstrate that faces and objects obscured by blurring, pixelation, and a recently-proposed privacy system called P3 can be successfully identified by a neural network trained on image datasets — in some cases at a more consistent rate than humans.
“We argue that humans may no longer be the ‘gold standard’ for extracting information from visual data,” the researchers write. “Recent advances in machine learning based on artificial neural networks have led to dramatic improvements in the state of the art for automated image recognition. Trained machine learning models now outperform humans on tasks such as object recognition and determining the geographic location of an image.”
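Mosaic pixelation, one of the obfuscations the paper defeats, is just block averaging. A minimal sketch (illustrative, not the researchers' code):

```python
# Minimal sketch of mosaic pixelation: replace each b x b block of the
# image with its average value. Works on grayscale (H, W) or color
# (H, W, C) arrays.
import numpy as np

def pixelate(img: np.ndarray, b: int) -> np.ndarray:
    h, w = img.shape[:2]
    out = img.astype(float).copy()
    for y in range(0, h, b):
        for x in range(0, w, b):
            block = out[y:y + b, x:x + b]
            block[...] = block.mean(axis=(0, 1))  # average the block
    return out.astype(img.dtype)
```

Because the transform is deterministic and public, an attacker can apply the same pixelation to a labeled training set and train a classifier on the obfuscated images directly, which is essentially the approach the researchers took.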
The vulnerability has been fixed.
Remember, a modern car isn't an automobile with a computer in it. It's a computer with four wheels and an engine. Actually, it's a distributed system of 20 to 400 computers with four wheels and an engine.