Entries Tagged "breaches"


On the Equifax Data Breach

Last Thursday, Equifax reported a data breach that affects 143 million US consumers, about 44% of the population. It’s an extremely serious breach; hackers got access to full names, Social Security numbers, birth dates, addresses, driver’s license numbers—exactly the sort of information criminals can use to impersonate victims to banks, credit card companies, insurance companies, and other businesses vulnerable to fraud.

Many sites posted guides to protecting yourself now that it’s happened. But if you want to prevent this kind of thing from happening again, your only solution is government regulation (as unlikely as that may be at the moment).

The market can’t fix this. Markets work because buyers choose between sellers, and sellers compete for buyers. In case you didn’t notice, you’re not Equifax’s customer. You’re its product.

This happened because your personal information is valuable, and Equifax is in the business of selling it. The company is much more than a credit reporting agency. It’s a data broker. It collects information about all of us, analyzes it all, and then sells those insights.

Its customers are people and organizations who want to buy information: banks looking to lend you money, landlords deciding whether to rent you an apartment, employers deciding whether to hire you, companies trying to figure out whether you’d be a profitable customer—everyone who wants to sell you something, even governments.

It’s not just Equifax. It might be one of the biggest, but there are 2,500 to 4,000 other data brokers that are collecting, storing, and selling information about you—almost all of them companies you’ve never heard of and have no business relationship with.

Surveillance capitalism fuels the Internet, and sometimes it seems that everyone is spying on you. You’re secretly tracked on pretty much every commercial website you visit. Facebook is the largest surveillance organization mankind has created; collecting data on you is its business model. I don’t have a Facebook account, but Facebook still keeps a surprisingly complete dossier on me and my associations—just in case I ever decide to join.

I also don’t have a Gmail account, because I don’t want Google storing my e-mail. But my guess is that it has about half of my e-mail anyway, because so many people I correspond with have accounts. I can’t even avoid it by choosing not to write to gmail.com addresses, because I have no way of knowing if newperson@company.com is hosted at Gmail.

And again, many companies that track us do so in secret, without our knowledge and consent. And most of the time we can’t opt out. Sometimes it’s a company like Equifax that doesn’t answer to us in any way. Sometimes it’s a company like Facebook, which is effectively a monopoly because of its sheer size. And sometimes it’s our cell phone provider. All of them have decided to track us and not compete by offering consumers privacy. Sure, you can tell people not to have an e-mail account or cell phone, but that’s not a realistic option for most people living in 21st-century America.

The companies that collect and sell our data don’t need to keep it secure in order to maintain their market share. They don’t have to answer to us, their products. They know it’s more profitable to save money on security and weather the occasional bout of bad press after a data loss. Yes, we are the ones who suffer when criminals get our data, or when our private information is exposed to the public, but ultimately why should Equifax care?

Yes, it’s a huge black eye for the company—this week. Soon, another company will have suffered a massive data breach and few will remember Equifax’s problem. Does anyone remember last year when Yahoo admitted that it exposed personal information of a billion users in 2013 and another half billion in 2014?

This market failure isn’t unique to data security. There is little improvement in safety and security in any industry until government steps in. Think of food, pharmaceuticals, cars, airplanes, restaurants, workplace conditions, and flame-retardant pajamas.

Market failures like this can only be solved through government intervention. By regulating the security practices of companies that store our data, and fining companies that fail to comply, governments can raise the cost of insecurity high enough that security becomes a cheaper alternative. They can do the same thing by giving individuals affected by these breaches the ability to sue successfully, citing the exposure of personal data itself as a harm.

By all means, take the recommended steps to protect yourself from identity theft in the wake of Equifax’s data breach, but recognize that these steps are only effective on the margins, and that most data security is out of your hands. Perhaps the Federal Trade Commission will get involved, but without evidence of “unfair and deceptive trade practices,” there’s nothing it can do. Perhaps there will be a class-action lawsuit, but because it’s hard to draw a line between any of the many data breaches you’re subjected to and a specific harm, courts are not likely to side with you.

If you don’t like how careless Equifax was with your data, don’t waste your breath complaining to Equifax. Complain to your government.

This essay previously appeared on CNN.com.

EDITED TO ADD: In the early hours of this breach, I did a radio interview where I minimized the ramifications of this. I didn’t know the full extent of the breach, and thought it was just another in an endless string of breaches. I wondered why the press was covering this one and not many of the others. I don’t remember which radio show interviewed me. I kind of hope it didn’t air.

Posted on September 13, 2017 at 12:49 PM

Data Is a Toxic Asset

Thefts of personal information aren’t unusual. Every week, thieves break into networks and steal data about people, often tens of millions at a time. Most of the time it’s information that’s needed to commit fraud, as happened in 2015 to Experian and the IRS.

Sometimes it’s stolen for purposes of embarrassment or coercion, as in the 2015 cases of Ashley Madison and the US Office of Personnel Management. The latter exposed highly sensitive personal data that affects the security of millions of government employees, probably to the Chinese. Always it’s personal information about us, information that we shared with the expectation that the recipients would keep it secret. And in every case, they did not.

The telecommunications company TalkTalk admitted that its data breach last year resulted in criminals using customer information to commit fraud. This was more bad news for a company that’s been hacked three times in the past 12 months, and has already seen some disastrous effects from losing customer data, including £60 million (about $83 million) in damages and over 100,000 customers. Its stock price took a pummeling as well.

People have been writing about 2015 as the year of data theft. I’m not sure if more personal records were stolen last year than in other recent years, but it certainly was a year for big stories about data thefts. I also think it was the year that industry started to realize that data is a toxic asset.

The phrase “big data” refers to the idea that large databases of seemingly random data about people are valuable. Retailers save our purchasing habits. Cell phone companies and app providers save our location information.

Telecommunications providers, social networks, and many other types of companies save information about who we talk to and share things with. Data brokers save everything about us they can get their hands on. This data is saved and analyzed, bought and sold, and used for marketing and other persuasive purposes.

And because saving all this data is so cheap, there’s no reason not to save as much as possible, and save it all forever. Figuring out what isn’t worth saving is hard. And because someday the companies might figure out how to turn the data into money, until recently there was absolutely no downside to saving everything. That changed this past year.

What all these data breaches are teaching us is that data is a toxic asset and saving it is dangerous.

Saving it is dangerous because it’s highly personal. Location data reveals where we live, where we work, and how we spend our time. If we all have a location tracker like a smartphone, correlating data reveals who we spend our time with—including who we spend the night with.

Our Internet search data reveals what’s important to us, including our hopes, fears, desires and secrets. Communications data reveals who our intimates are, and what we talk about with them. I could go on. Our reading habits, or purchasing data, or data from sensors as diverse as cameras and fitness trackers: All of it can be intimate.

Saving it is dangerous because many people want it. Of course companies want it; that’s why they collect it in the first place. But governments want it, too. In the United States, the National Security Agency and FBI use secret deals, coercion, threats and legal compulsion to get at the data. Foreign governments just come in and steal it. When a company with personal data goes bankrupt, it’s one of the assets that gets sold.

Saving it is dangerous because it’s hard for companies to secure. For a lot of reasons, computer and network security is very difficult. Attackers have an inherent advantage over defenders, and a sufficiently skilled, funded and motivated attacker will always get in.

And saving it is dangerous because failing to secure it is damaging. It will reduce a company’s profits, reduce its market share, hurt its stock price, cause it public embarrassment, and—in some cases—result in expensive lawsuits and occasionally criminal charges.

All this makes data a toxic asset, and it continues to be toxic as long as it sits in a company’s computers and networks. The data is vulnerable, and the company is vulnerable. It’s vulnerable to hackers and governments. It’s vulnerable to employee error. And when there’s a toxic data spill, millions of people can be affected. The 2015 Anthem Health data breach affected 80 million people. The 2013 Target Corp. breach affected 110 million.

This toxic data can sit in organizational databases for a long time. Some of the stolen Office of Personnel Management data was decades old. Do you have any idea which companies still have your earliest e-mails, or your earliest posts on that now-defunct social network?

If data is toxic, why do organizations save it?

There are three reasons. The first is that we’re in the middle of the hype cycle of big data. Companies and governments are still punch-drunk on data, and have believed the wildest of promises on how valuable that data is. The research showing that more data isn’t necessarily better, and that there are serious diminishing returns when adding additional data to processes like personalized advertising, is just starting to come out.

The second is that many organizations are still downplaying the risks. Some simply don’t realize just how damaging a data breach would be. Some believe they can completely protect themselves against a data breach, or at least that their legal and public relations teams can minimize the damage if they fail. And while there’s certainly a lot that companies can do technically to better secure the data they hold about all of us, there’s no better security than deleting the data.

The last reason is that some organizations understand both the first two reasons and are saving the data anyway. The culture of venture-capital-funded start-up companies is one of extreme risk taking. These are companies that are always running out of money, that always know their impending death date.

They are so far from profitability that their only hope for surviving is to get even more money, which means they need to demonstrate rapid growth or increasing value. This motivates those companies to take risks that larger, more established, companies would never take. They might take extreme chances with our data, even flout regulations, because they literally have nothing to lose. And often, the most profitable business models are the most risky and dangerous ones.

We can be smarter than this. We need to regulate what corporations can do with our data at every stage: collection, storage, use, resale and disposal. We can make corporate executives personally liable so they know there’s a downside to taking chances. We can make the business models that involve massively surveilling people the less compelling ones, simply by making certain business practices illegal.

The Ashley Madison data breach was such a disaster for the company because it saved its customers’ real names and credit card numbers. It didn’t have to do it this way. It could have processed the credit card information, given the user access, and then deleted all identifying information.

To be sure, it would have been a different company. It would have had less revenue, because it couldn’t charge users a monthly recurring fee. Users who lost their password would have had more trouble re-accessing their account. But it would have been safer for its customers.
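The alternative design described above can be sketched in a few lines. This is a hypothetical illustration, not Ashley Madison’s actual architecture; the function names, the token scheme, and the payment stand-in are all assumptions:

```python
import secrets

# Hypothetical sketch of the data-minimization design: charge the card,
# hand the user a random credential, and retain no identifying data.

users = {}  # token -> account state; no names or card numbers are stored

def charge_card(card_number):
    # Stand-in for a one-time payment-processor call (an assumption,
    # not a real API); the number is used here and never persisted.
    return True

def signup(real_name, card_number):
    """Process payment, grant access, then discard identifying data."""
    if not charge_card(card_number):
        raise ValueError("payment failed")
    token = secrets.token_hex(16)    # random credential given to the user
    users[token] = {"active": True}  # account keyed only by the token
    # real_name and card_number go out of scope here; nothing in `users`
    # links a token back to a real identity, so a database dump exposes
    # no names or card numbers.
    return token

def login(token):
    account = users.get(token)
    return account is not None and account["active"]
```

The trade-offs the essay lists fall directly out of this design: with no stored identity there is nothing to bill monthly, and a user who loses the token has no recovery path.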

Similarly, the Office of Personnel Management didn’t have to store everyone’s information online and accessible. It could have taken older records offline, or at least onto a separate network with more secure access controls. Yes, it wouldn’t be immediately available to government employees doing research, but it would have been much more secure.

Data is a toxic asset. We need to start thinking about it as such, and treat it as we would any other source of toxicity. To do anything else is to risk our security and privacy.

This essay previously appeared on CNN.com.

Posted on March 4, 2016 at 5:32 AM

Good Article on the Sony Attack

Fortune has a three-part article on the Sony attack by North Korea. There’s not a lot of tech here; it’s mostly about Sony’s internal politics regarding the movie and IT security before the attack, and some about its reaction afterwards.

Despite what I wrote at the time, I now believe that North Korea was responsible for the attack. This is the article that convinced me. It’s about the US government’s reaction to the attack.

Posted on September 28, 2015 at 6:22 AM

Hacking Team, Computer Vulnerabilities, and the NSA

When the National Security Agency (NSA)—or any government agency—discovers a vulnerability in a popular computer system, should it disclose it or not? The debate exists because vulnerabilities have both offensive and defensive uses. Offensively, vulnerabilities can be exploited to penetrate others’ computers and networks, either for espionage or destructive purposes. Defensively, publicly revealing security flaws can be used to make our own systems less vulnerable to those same attacks. The two options are mutually exclusive: either we can help to secure both our own networks and the systems we might want to attack, or we can keep both networks vulnerable.

Many, myself included, have long argued that defense is more important than offense, and that we should patch almost every vulnerability we find. Even the President’s Review Group on Intelligence and Communications Technologies recommended in 2013 that “U.S. policy should generally move to ensure that Zero Days are quickly blocked, so that the underlying vulnerabilities are patched on U.S. Government and other networks.”

Both the NSA and the White House have talked about a secret “vulnerability equities process” they go through when they find a security flaw. Both maintain that the process is heavily weighted in favor of disclosing vulnerabilities to the vendors and having them patched.

An undated document—declassified last week with heavy redactions after a year-long Freedom of Information Act lawsuit—shines some light on the process but still leaves many questions unanswered. An important question is: which vulnerabilities go through the equities process, and which don’t?

A real-world example of the ambiguity surrounding the equities process emerged from the recent hacking of the cyber weapons arms manufacturer Hacking Team. The corporation sells Internet attack and espionage software to countries around the world, including many reprehensible governments, to allow them to eavesdrop on their citizens, sometimes as a prelude to arrest and torture. The company’s tools have been used against U.S. journalists.

In July, unidentified hackers penetrated Hacking Team’s corporate network and stole almost everything of value, including corporate documents, e-mails, and source code. The hackers proceeded to post it all online.

The NSA was most likely able to penetrate Hacking Team’s network and steal the same data. The agency probably did it years ago. They would have learned the same things about Hacking Team’s network software that we did in July: how it worked, what vulnerabilities they were using, and which countries were using their cyber weapons. Armed with that knowledge, the NSA could have quietly neutralized many of the company’s products. The United States could have alerted software vendors about the zero-day exploits and had them patched. It could have told the antivirus companies how to detect and remove Hacking Team’s malware. It could have done a lot. Assuming that the NSA did infiltrate Hacking Team’s network, the fact that the United States chose not to reveal the vulnerabilities it uncovered is both revealing and interesting, and the decision provides a window into the vulnerability equities process.

The first question to ask is why? There are three possible reasons. One, the software was also being used by the United States, and the government did not want to lose its benefits. Two, NSA was able to eavesdrop on other entities using Hacking Team’s software, and they wanted to continue benefitting from the intelligence. And three, the agency did not want to expose their own hacking capabilities by demonstrating that they had compromised Hacking Team’s network. In reality, the decision may have been due to a combination of the three possibilities.

How was this decision made? More explicitly, did any vulnerabilities that Hacking Team exploited, and the NSA was aware of, go through the vulnerability equities process? It is unclear. The NSA plays fast and loose when deciding which security flaws go through the procedure. The process document states that it applies to vulnerabilities that are “newly discovered and not publicly known.” Does that refer only to vulnerabilities discovered by the NSA, or does the process also apply to zero-day vulnerabilities that the NSA discovers others are using? If vulnerabilities used in others’ cyber weapons are excluded, it is very difficult to talk about the process as it is currently formulated.

The U.S. government should close the vulnerabilities that foreign governments are using to attack people and networks. If taking action is as easy as plugging security vulnerabilities in products and making everyone in the world more secure, that should be standard procedure. The fact that the NSA—we assume—chose not to suggests that the United States has its priorities wrong.

Undoubtedly, there would be blowback from closing vulnerabilities utilized in others’ cyber weapons. Several companies sell information about vulnerabilities to different countries, and if they found that those security gaps were regularly closed soon after they started trying to sell them, they would quickly suspect espionage and take more defensive precautions. The new wariness of sellers and decrease in available security flaws would also raise the price of vulnerabilities worldwide. The United States is one of the biggest buyers, meaning that we benefit from greater availability and lower prices.

If we assume the NSA has penetrated these companies’ networks, we should also assume that the intelligence agencies of countries like Russia and China have done the same. Are those countries using Hacking Team’s vulnerabilities in their cyber weapons? We are all embroiled in a cyber arms race—finding, buying, stockpiling, using, and exposing vulnerabilities—and our actions will affect the actions of all the other players.

It seems foolish that we would not take every opportunity to neutralize the cyberweapons of those countries that would attack the United States or use them against their own people for totalitarian gain. Is it truly possible that when the NSA intercepts and reverse-engineers a cyberweapon used by one of our enemies—whether a Hacking Team customer or a country like China—we don’t close the vulnerabilities that that weapon uses? Does the NSA use knowledge of the weapon to defend the U.S. government networks whose security it maintains, at the expense of everyone else in the country and the world? That seems incredibly dangerous.

In my book Data and Goliath, I suggested breaking apart the NSA’s offensive and defensive components, in part to resolve the agency’s internal conflict between attack and defense. One part would be focused on foreign espionage, and another on cyberdefense. This Hacking Team discussion demonstrates that even separating the agency would not be enough. The espionage-focused organization that penetrates and analyzes the products of cyberweapons arms manufacturers would regularly learn about vulnerabilities used to attack systems and networks worldwide. Thus, that section of the agency would still have to transfer that knowledge to the defense-focused organization. That is not going to happen as long as the United States prioritizes surveillance over security and attack over defense. The norms governing actions in cyberspace need to be changed, a task far more difficult than any reform of the NSA.

This essay previously appeared in the Georgetown Journal of International Affairs.

EDITED TO ADD: Hacker News thread.

Posted on September 15, 2015 at 6:38 AM

The Security Risks of Third-Party Data

Most of us get to be thoroughly relieved that our e-mails weren’t in the Ashley Madison database. But don’t get too comfortable. Whatever secrets you have, even the ones you don’t think of as secret, are more likely than you think to get dumped on the Internet. It’s not your fault, and there’s largely nothing you can do about it.

Welcome to the age of organizational doxing.

Organizational doxing—stealing data from an organization’s network and indiscriminately dumping it all on the Internet—is an increasingly popular attack against organizations. Because our data is connected to the Internet, and stored in corporate networks, we are all in the potential blast radius of these attacks. While the risk that any particular bit of data gets published is low, we have to start thinking about what could happen if a larger-scale breach affects us or the people we care about. It’s going to get a lot uglier before security improves.

We don’t know why anonymous hackers broke into the networks of Avid Life Media, then stole and published 37 million—so far—personal records of AshleyMadison.com users. The hackers say it was because of the company’s deceptive practices. They expressed indifference to the “cheating dirtbags” who had signed up for the site. The primary target, the hackers said, was the company itself. That philanderers were exposed, marriages were ruined, and people were driven to suicide was apparently a side effect.

Last November, the North Korean government stole and published gigabytes of corporate e-mail from Sony Pictures. This was part of a much larger doxing—a hack aimed at punishing the company for making a movie parodying the North Korean leader Kim Jong-un. The press focused on Sony’s corporate executives, who had sniped at celebrities and made racist jokes about President Obama. But also buried in those e-mails were loves, losses, confidences, and private conversations of thousands of innocent employees. The press didn’t bother with those e-mails—and we know nothing of any personal tragedies that resulted from their friends’ searches. They, too, were caught in the blast radius of the larger attack.

The Internet is more than a way for us to get information or connect with our friends. It has become a place for us to store our personal information. Our e-mail is in the cloud. So are our address books and calendars, whether we use Google, Apple, Microsoft, or someone else. We store to-do lists on Remember the Milk and keep our jottings on Evernote. Fitbit and Jawbone store our fitness data. Flickr, Facebook, and iCloud are the repositories for our personal photos. Facebook and Twitter store many of our intimate conversations.

It often feels like everyone is collecting our personal information. Smartphone apps collect our location data. Google can draw a surprisingly intimate portrait of what we’re thinking about from our Internet searches. Dating sites (even those less titillating than Ashley Madison), medical-information sites, and travel sites all have detailed portraits of who we are and where we go. Retailers save records of our purchases, and those databases are stored on the Internet. Data brokers have detailed dossiers that can include all of this and more.

Many people don’t think about the security implications of this information existing in the first place. They might be aware that it’s mined for advertising and other marketing purposes. They might even know that the government can get its hands on such data, with different levels of ease depending on the country. But it doesn’t generally occur to people that their personal information might be available to anyone who wants to look.

In reality, all these networks are vulnerable to organizational doxing. Most aren’t any more secure than Ashley Madison or Sony were. We could wake up one morning and find detailed information about our Uber rides, our Amazon purchases, our subscriptions to pornographic websites—anything we do on the Internet—published and available. It’s not likely, but it’s certainly possible.

Right now, you can search the Ashley Madison database for any e-mail address, and read that person’s details. You can search the Sony data dump and read the personal chatter of people who work for the company. Tempting though it may be, there are many reasons not to search for people you know on Ashley Madison. The one I most want to focus on is context. An e-mail address might be in that database for many reasons, not all of them lascivious. But if you find your spouse or your friend in there, you don’t necessarily know the context. It’s the same with the Sony employee e-mails, and the data from whatever company is doxed next. You’ll be able to read the data, but without the full story, it can be hard to judge the meaning of what you’re reading.

Even so, of course people are going to look. Reporters will search for public figures. Individuals will search for people they know. Secrets will be read and passed around. Anguish and embarrassment will result. In some cases, lives will be destroyed.

Privacy isn’t about hiding something. It’s about being able to control how we present ourselves to the world. It’s about maintaining a public face while at the same time being permitted private thoughts and actions. It’s about personal dignity.

Organizational doxing is a powerful attack against organizations, and one that will continue because it’s so effective. And while the network owners and the hackers might be battling it out for their own reasons, sometimes it’s our data that’s the prize. Having information we thought private turn out to be public and searchable is what happens when the hackers win. It’s a result of the information age that hasn’t been fully appreciated, and one that we’re still not prepared to face.

This essay previously appeared on the Atlantic.

Posted on September 9, 2015 at 8:42 AM

Are Data Breaches Getting Larger?

This research says that data breaches are not getting larger over time.

“Hype and Heavy Tails: A Closer Look at Data Breaches,” by Benjamin Edwards, Steven Hofmeyr, and Stephanie Forrest:

Abstract: Recent widely publicized data breaches have exposed the personal information of hundreds of millions of people. Some reports point to alarming increases in both the size and frequency of data breaches, spurring institutions around the world to address what appears to be a worsening situation. But, is the problem actually growing worse? In this paper, we study a popular public dataset and develop Bayesian Generalized Linear Models to investigate trends in data breaches. Analysis of the model shows that neither size nor frequency of data breaches has increased over the past decade. We find that the increases that have attracted attention can be explained by the heavy-tailed statistical distributions underlying the dataset. Specifically, we find that data breach size is log-normally distributed and that the daily frequency of breaches is described by a negative binomial distribution. These distributions may provide clues to the generative mechanisms that are responsible for the breaches. Additionally, our model predicts the likelihood of breaches of a particular size in the future. For example, we find that in the next year there is only a 31% chance of a breach of 10 million records or more in the US. Regardless of any trend, data breaches are costly, and we combine the model with two different cost models to project that in the next three years breaches could cost up to $55 billion.
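The two distributions named in the abstract are easy to experiment with. Here is a minimal, standard-library-only Python sketch; the parameters (mu, sigma, r, p) are made-up values for illustration, not the ones the authors fitted to the dataset:

```python
import math
import random

random.seed(42)

def lognormal_breach_sizes(n, mu=9.0, sigma=2.5):
    """Draw n simulated breach sizes (record counts) from a log-normal
    distribution, as the paper reports for breach size."""
    return [random.lognormvariate(mu, sigma) for _ in range(n)]

def neg_binomial(r, p):
    """Simulated daily breach count: a negative binomial draw, sampled
    as the sum of r geometric draws (failures before each success)."""
    total = 0
    for _ in range(r):
        u = 1.0 - random.random()  # u in (0, 1], avoids log(0)
        total += int(math.log(u) / math.log(1.0 - p))
    return total

sizes = lognormal_breach_sizes(100_000)
# Empirical tail probability: fraction of simulated breaches with at
# least 10 million records (cf. the paper's 31%-in-a-year figure).
p_big = sum(s >= 10_000_000 for s in sizes) / len(sizes)

daily_counts = [neg_binomial(3, 0.5) for _ in range(365)]
```

The heavy right tail is the point: most simulated breaches are small, but a handful are enormous, which is why a few headline incidents can look like a worsening trend without any change in the underlying process.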

The paper was presented at WEIS 2015.

Posted on August 25, 2015 at 6:27 AM

More on Hacking Team

Read this:

Hacking Team asked its customers to shut down operations, but according to one of the leaked files, as part of Hacking Team’s “crisis procedure,” it could have killed their operations remotely. The company, in fact, has “a backdoor” into every customer’s software, giving it the ability to suspend it or shut it down—something that even customers aren’t told about.

To make matters worse, every copy of Hacking Team’s Galileo software is watermarked, according to the source, which means Hacking Team, and now everyone with access to this data dump, can find out who operates it and who they’re targeting with it.

It’s one thing to have dissatisfied customers. It’s another to have dissatisfied customers with death squads. I don’t think the company is going to survive this.

Posted on July 7, 2015 at 5:30 PM

Hacking Team Is Hacked

Someone hacked the cyberweapons arms manufacturer Hacking Team and posted 400 GB of internal company data.

Hacking Team is a pretty sleazy company, selling surveillance software to all sorts of authoritarian governments around the world. Reporters Without Borders calls it one of the enemies of the Internet. Citizen Lab has published many reports about their activities.

It’s a huge trove of data, including a spreadsheet listing every government client, when they first bought the surveillance software, and how much money they have paid the company to date. Not surprisingly, the company has been lying about who its customers are. Chris Soghoian has been going through the data and tweeting about it. More Twitter comments on the data here. Here are articles from Wired and The Guardian.

Here’s the torrent, if you want to look at the data yourself. (Here’s another mirror.) The source code is up on Github.

I expect we’ll be sifting through all the data for a while.

Slashdot thread. Hacker News thread.

EDITED TO ADD: The Hacking Team CEO, David Vincenzetti, doesn’t like me:

In another [e-mail], the Hacking Team CEO on 15 May claimed renowned cryptographer Bruce Schneier was “exploiting the Big Brother is Watching You FUD (Fear, Uncertainty and Doubt) phenomenon in order to sell his books, write quite self-promoting essays, give interviews, do consulting etc. and earn his hefty money.”

Meanwhile, Hacking Team has told all of its customers to shut down all uses of its software. They are in “full on emergency mode,” which is perfectly understandable.

EDITED TO ADD: Hacking Team had no exploits for an un-jail-broken iPhone. Seems like the platform of choice if you want to stay secure.

EDITED TO ADD (7/14): WikiLeaks has published a huge trove of e-mails.

Hacking Team had a signed iOS certificate, which has been revoked.

Posted on July 6, 2015 at 12:53 PM
