Friday Squid Blogging: Whale Mistaken for Squid

A purported giant squid that washed up on the shore in Norfolk, England, is actually a minke whale.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Posted on March 4, 2016 at 4:51 PM

Data Is a Toxic Asset

Thefts of personal information aren't unusual. Every week, thieves break into networks and steal data about people, often tens of millions at a time. Most of the time it's information that's needed to commit fraud, as happened in 2015 to Experian and the IRS.

Sometimes it's stolen for purposes of embarrassment or coercion, as in the 2015 cases of Ashley Madison and the US Office of Personnel Management. The latter exposed highly sensitive personal data that affects the security of millions of government employees, probably to the Chinese. Always it's personal information about us, information that we shared with the expectation that the recipients would keep it secret. And in every case, they did not.

The telecommunications company TalkTalk admitted that its data breach last year resulted in criminals using customer information to commit fraud. This was more bad news for a company that's been hacked three times in the past 12 months, and has already seen some disastrous effects from losing customer data, including £60 million (about $83 million) in damages and over 100,000 customers. Its stock price took a pummeling as well.

People have been writing about 2015 as the year of data theft. I'm not sure if more personal records were stolen last year than in other recent years, but it certainly was a year for big stories about data thefts. I also think it was the year that industry started to realize that data is a toxic asset.

The phrase "big data" refers to the idea that large databases of seemingly random data about people is valuable. Retailers save our purchasing habits. Cell phone companies and app providers save our location information.

Telecommunications providers, social networks, and many other types of companies save information about who we talk to and share things with. Data brokers save everything about us they can get their hands on. This data is saved and analyzed, bought and sold, and used for marketing and other persuasive purposes.

And because the cost of saving all this data is so low, there's no reason not to save as much as possible, and save it all forever. Figuring out what isn't worth saving is hard. And because someday the companies might figure out how to turn the data into money, until recently there was absolutely no downside to saving everything. That changed this past year.

What all these data breaches are teaching us is that data is a toxic asset and saving it is dangerous.

Saving it is dangerous because it's highly personal. Location data reveals where we live, where we work, and how we spend our time. If we all have a location tracker like a smartphone, correlating data reveals who we spend our time with -- including who we spend the night with.

Our Internet search data reveals what's important to us, including our hopes, fears, desires and secrets. Communications data reveals who our intimates are, and what we talk about with them. I could go on. Our reading habits, or purchasing data, or data from sensors as diverse as cameras and fitness trackers: All of it can be intimate.

Saving it is dangerous because many people want it. Of course companies want it; that's why they collect it in the first place. But governments want it, too. In the United States, the National Security Agency and FBI use secret deals, coercion, threats and legal compulsion to get at the data. Foreign governments just come in and steal it. When a company with personal data goes bankrupt, it's one of the assets that gets sold.

Saving it is dangerous because it's hard for companies to secure. For a lot of reasons, computer and network security is very difficult. Attackers have an inherent advantage over defenders, and a sufficiently skilled, funded and motivated attacker will always get in.

And saving it is dangerous because failing to secure it is damaging. It will reduce a company's profits, reduce its market share, hurt its stock price, cause it public embarrassment, and -- in some cases -- result in expensive lawsuits and, occasionally, criminal charges.

All this makes data a toxic asset, and it continues to be toxic as long as it sits in a company's computers and networks. The data is vulnerable, and the company is vulnerable. It's vulnerable to hackers and governments. It's vulnerable to employee error. And when there's a toxic data spill, millions of people can be affected. The 2015 Anthem Health data breach affected 80 million people. The 2013 Target Corp. breach affected 110 million.

This toxic data can sit in organizational databases for a long time. Some of the stolen Office of Personnel Management data was decades old. Do you have any idea which companies still have your earliest e-mails, or your earliest posts on that now-defunct social network?

If data is toxic, why do organizations save it?

There are three reasons. The first is that we're in the middle of the hype cycle of big data. Companies and governments are still punch-drunk on data, and have believed the wildest of promises on how valuable that data is. The research showing that more data isn't necessarily better, and that there are serious diminishing returns when adding additional data to processes like personalized advertising, is just starting to come out.

The second is that many organizations are still downplaying the risks. Some simply don't realize just how damaging a data breach would be. Some believe they can completely protect themselves against a data breach, or at least that their legal and public relations teams can minimize the damage if they fail. And while there's certainly a lot that companies can do technically to better secure the data they hold about all of us, there's no better security than deleting the data.

The last reason is that some organizations understand both the first two reasons and are saving the data anyway. The culture of venture-capital-funded start-up companies is one of extreme risk taking. These are companies that are always running out of money, that always know their impending death date.

They are so far from profitability that their only hope for surviving is to get even more money, which means they need to demonstrate rapid growth or increasing value. This motivates those companies to take risks that larger, more established, companies would never take. They might take extreme chances with our data, even flout regulations, because they literally have nothing to lose. And often, the most profitable business models are the most risky and dangerous ones.

We can be smarter than this. We need to regulate what corporations can do with our data at every stage: collection, storage, use, resale and disposal. We can make corporate executives personally liable so they know there's a downside to taking chances. We can make the business models that involve massively surveilling people the less compelling ones, simply by making certain business practices illegal.

The Ashley Madison data breach was such a disaster for the company because it saved its customers' real names and credit card numbers. It didn't have to do it this way. It could have processed the credit card information, given the user access, and then deleted all identifying information.

To be sure, it would have been a different company. It would have had less revenue, because it couldn't charge users a monthly recurring fee. Users who lost their password would have had more trouble re-accessing their account. But it would have been safer for its customers.
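
To make that alternative concrete, here is a minimal sketch in Python of the kind of signup flow described above: charge the card once, keep only a pseudonymous record, and discard the identifying data. The payment-gateway call and the specific parameters are hypothetical placeholders, not how Ashley Madison or any real payment processor actually works.

```python
import hashlib
import secrets

def create_minimal_account(store, payment_gateway, card_number, card_expiry):
    """Hypothetical signup flow: charge once, keep only what's needed to
    let the user back in, and never persist anything identifying."""
    # Charge through a (hypothetical) payment gateway; keep only its opaque
    # receipt identifier, never the card number or the cardholder's name.
    receipt_id = payment_gateway.charge(card_number, card_expiry)

    # Credentials are random, not derived from any real-world identity.
    username = "user-" + secrets.token_hex(8)
    password = secrets.token_urlsafe(16)
    salt = secrets.token_bytes(16)
    password_hash = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)

    # Persist only the pseudonymous record; card data never touches disk.
    store[username] = {"salt": salt, "password_hash": password_hash,
                       "receipt": receipt_id}

    # The credentials are shown to the user exactly once. Losing them makes
    # account recovery harder -- the trade-off described above.
    return username, password
```

The point is not this particular code but the design choice it encodes: data that is never stored cannot be stolen in a breach.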

Similarly, the Office of Personnel Management didn't have to store everyone's information online and accessible. It could have taken older records offline, or at least onto a separate network with more secure access controls. Yes, it wouldn't be immediately available to government employees doing research, but it would have been much more secure.

Data is a toxic asset. We need to start thinking about it as such, and treat it as we would any other source of toxicity. To do anything else is to risk our security and privacy.

This essay previously appeared on CNN.com.

Posted on March 4, 2016 at 5:32 AM

DROWN Attack

Earlier this week, we learned of yet another attack against SSL/TLS where an attacker can force people to use insecure algorithms. It's called DROWN. Here's a good news article on the attack, the technical paper describing the attack, and a very good technical blog post by Matthew Green.
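
DROWN targets servers that still accept SSLv2 connections, or that share an RSA key with one that does, so the mitigation is to disable legacy protocol support everywhere the key is used. As a rough illustration -- a generic sketch, not anything from the linked papers -- here is what an explicit protocol floor looks like in Python's ssl module; the certificate paths are placeholders.

```python
import ssl

# A rough sketch of a server-side TLS context with an explicit protocol
# floor. Modern Python builds cannot speak SSLv2 at all, but stating the
# minimum version documents the intent and also rules out TLS 1.0/1.1.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.minimum_version = ssl.TLSVersion.TLSv1_2
context.load_cert_chain(certfile="server.pem", keyfile="server.key")  # placeholder paths

# DROWN also exploits RSA key reuse: a server that never speaks SSLv2 is
# still exposed if the same key or certificate is used by another service
# that does, so the key above should not be shared with legacy endpoints.
```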

As an aside, I am getting pretty annoyed at all the marketing surrounding vulnerabilities these days. Vulnerabilities do not need a catchy name, a dedicated website -- even though it's a very good website -- or a logo.

Posted on March 3, 2016 at 2:09 PM

Security Vulnerabilities in Wireless Keyboards

Many wireless keyboards have a security vulnerability that allows someone to hack the computer using the keyboard-computer link. (Technical details here.)

An attacker can launch the attack from up to 100 meters away. The attacker is able to take control of the target computer, without physically being in front of it, and type arbitrary text or send scripted commands. It is therefore possible to rapidly perform malicious activities without being detected.

The MouseJack exploit centers around injecting unencrypted keystrokes into a target computer. Mouse movements are usually sent unencrypted, and keystrokes are often encrypted (to prevent eavesdropping on what is being typed). However, the MouseJack vulnerability takes advantage of affected receiver dongles, and their associated software, allowing unencrypted keystrokes transmitted by an attacker to be passed on to the computer's operating system as if the victim had legitimately typed them.

Affected devices are starting to patch. Here's Logitech:

Logitech said that it has developed a firmware update, which is available for download. It is the only one among the affected vendors to respond so far with a patch.

"Logitech's Unifying technology was launched in 2007 and has been used by millions of our consumers since. To our knowledge, we have never been contacted by any consumer with such an issue," Asif Ahsan, Senior Director, Engineering, Logitech. "We have nonetheless taken Bastille Security's work seriously and developed a firmware fix. If any of our customers have concerns, and would like to ensure that this potential vulnerability is eliminated...They should also ensure their Logitech Options software is up to date."

Posted on March 3, 2016 at 6:29 AM

The Mathematics of Conspiracy

This interesting study tries to build a mathematical model for the continued secrecy of conspiracies, and to predict how long it will take before they are revealed to the general public, either wittingly or unwittingly.

The equation developed by Dr Grimes, a post-doctoral physicist at Oxford, relied upon three factors: the number of conspirators involved, the amount of time that has passed, and the intrinsic probability of a conspiracy failing.

He then applied his equation to four famous conspiracy theories: The belief that the Moon landing was faked, the belief that climate change is a fraud, the belief that vaccines cause autism, and the belief that pharmaceutical companies have suppressed a cure for cancer.

Dr Grimes's analysis suggests that if these four conspiracies were real, most are very likely to have been revealed as such by now.

Specifically, the Moon landing "hoax" would have been revealed in 3.7 years, the climate change "fraud" in 3.7 to 26.8 years, the vaccine-autism "conspiracy" in 3.2 to 34.8 years, and the cancer "conspiracy" in 3.2 years.

He also ran the model against two actual conspiracies: the NSA's PRISM program and the Tuskegee syphilis experiment.

From the paper:

Abstract: Conspiratorial ideation is the tendency of individuals to believe that events and power relations are secretly manipulated by certain clandestine groups and organisations. Many of these ostensibly explanatory conjectures are non-falsifiable, lacking in evidence or demonstrably false, yet public acceptance remains high. Efforts to convince the general public of the validity of medical and scientific findings can be hampered by such narratives, which can create the impression of doubt or disagreement in areas where the science is well established. Conversely, historical examples of exposed conspiracies do exist and it may be difficult for people to differentiate between reasonable and dubious assertions. In this work, we establish a simple mathematical model for conspiracies involving multiple actors with time, which yields failure probability for any given conspiracy. Parameters for the model are estimated from literature examples of known scandals, and the factors influencing conspiracy success and failure are explored. The model is also used to estimate the likelihood of claims from some commonly-held conspiratorial beliefs; these are namely that the moon-landings were faked, climate-change is a hoax, vaccination is dangerous and that a cure for cancer is being suppressed by vested interests. Simulations of these claims predict that intrinsic failure would be imminent even with the most generous estimates for the secret-keeping ability of active participants -- the results of this model suggest that large conspiracies (≥1000 agents) quickly become untenable and prone to failure. The theory presented here might be useful in counteracting the potentially deleterious consequences of bogus and anti-science narratives, and examining the hypothetical conditions under which sustainable conspiracy might be possible.
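
The abstract doesn't reproduce the equation itself, but the general shape of such a model is easy to sketch: assume each conspirator independently exposes the plot with some small annual probability, then ask when the cumulative exposure probability crosses a threshold. The Python below is that simplified stand-in; the leak rate and threshold are illustrative assumptions, not the paper's fitted parameters.

```python
import math

def exposure_probability(num_conspirators, years, leak_rate=4e-6):
    """Probability that at least one conspirator exposes the plot within
    `years`, assuming independent leaks at a constant per-person annual
    rate. A simplified stand-in for the paper's model, not Grimes's
    fitted equation; the default rate is only an illustrative guess."""
    expected_leaks = num_conspirators * years * leak_rate
    return 1.0 - math.exp(-expected_leaks)

def years_until_likely_exposure(num_conspirators, threshold=0.95, leak_rate=4e-6):
    """Smallest number of years at which the exposure probability crosses
    `threshold`, under the same simplifying assumptions."""
    return -math.log(1.0 - threshold) / (num_conspirators * leak_rate)

# Larger conspiracies cross any fixed probability threshold far sooner:
for n in (100, 10_000, 400_000):
    print(n, round(years_until_likely_exposure(n), 1))
```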

Lots on the psychology of conspiracy theories here.

Posted on March 2, 2016 at 12:39 PM

Company Tracks Iowa Caucusgoers by their Cell Phones

It's not just governments. Companies like Dstillery are doing this too:

"We watched each of the caucus locations for each party and we collected mobile device ID's," Dstillery CEO Tom Phillips said. "It's a combination of data from the phone and data from other digital devices."

Dstillery found some interesting things about voters. For one, people who loved to grill or work on their lawns overwhelmingly voted for Trump in Iowa, according to Phillips. There were some pretty unexpected characteristics that came up, too.

"NASCAR was the one outlier, for Trump and Clinton," Phillips said. "In Clinton's counties, NASCAR way over-indexed."

Kashmir Hill wondered how:

What really happened is that Dstillery gets information from people's phones via ad networks. When you open an app or look at a browser page, there's a very fast auction that happens where different advertisers bid to get to show you an ad. Their bid is based on how valuable they think you are, and to decide that, your phone sends them information about you, including, in many cases, an identifying code (that they've built a profile around) and your location information, down to your latitude and longitude.

Yes, for the vast majority of people, ad networks are doing far more information collection about them than the NSA -- but they don't explicitly link it to their names.

So on the night of the Iowa caucus, Dstillery flagged all the auctions that took place on phones in latitudes and longitudes near caucus locations. It wound up spotting 16,000 devices on caucus night, as those people had granted location privileges to the apps or devices that served them ads. It captured those mobile IDs and then looked up the characteristics associated with those IDs in order to make observations about the kind of people that went to Republican caucus locations (young parents) versus Democrat caucus locations. It drilled down farther (e.g., 'people who like NASCAR voted for Trump and Clinton') by looking at which candidate won at a particular caucus location.
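
The mechanics Hill describes amount to a geofencing pass over ad-auction logs: take every bid request whose reported coordinates fall near a caucus site and collect the device IDs. Here's a minimal sketch of that technique, with assumed field names and an arbitrary radius -- not Dstillery's actual pipeline.

```python
import math

def haversine_meters(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in meters."""
    r = 6371000.0  # Earth radius in meters
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def flag_devices_near_sites(bid_requests, sites, radius_m=150):
    """Collect device IDs from ad-auction bid requests whose reported
    location falls within `radius_m` of any site of interest.
    `bid_requests` is assumed to be dicts with 'device_id', 'lat', 'lon'."""
    flagged = set()
    for req in bid_requests:
        for site_lat, site_lon in sites:
            if haversine_meters(req["lat"], req["lon"], site_lat, site_lon) <= radius_m:
                flagged.add(req["device_id"])
                break
    return flagged
```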

Okay, so it didn't collect names. But how much harder could that have been?

Posted on March 2, 2016 at 6:34 AM

WikiLeaks Publishes NSA Target List

As part of an ongoing series of classified NSA target lists and raw intercepts, WikiLeaks published details of the NSA's spying on UN Secretary General Ban Ki-Moon, German Chancellor Angela Merkel, Israeli prime minister Benjamin Netanyahu, former Italian prime minister Silvio Berlusconi, former French leader Nicolas Sarkozy, and key Japanese and EU trade reps. WikiLeaks never says this, but it's pretty obvious that these documents don't come from Snowden's archive.

I've said this before, but it bears repeating. Spying on foreign leaders is exactly what I expect the NSA to do. It's spying on the rest of the world that I have a problem with.

Other leaks in this series: France, Germany, Brazil, Japan, Italy, the European Union, and the United Nations.

BoingBoing post.

Posted on March 1, 2016 at 12:55 PM

Lots More Writing about the FBI vs. Apple

I have written two posts on the case, and at the bottom of those essays are lots of links to other essays written by other people. Here are more links.

If you read just one thing on the technical aspects of this case, read Susan Landau's testimony before the House Judiciary Committee. It's very comprehensive, and very good.

Others are testifying, too.

Apple is fixing the vulnerability. The Justice Department wants Apple to unlock nine more phones.

Apple prevailed in a different iPhone unlocking case.

Why the First Amendment is a bad argument. And why the All Writs Act is the wrong tool.

Dueling poll results: Pew Research reports that 51% side with the FBI, while a Reuters poll reveals that "forty-six percent of respondents said they agreed with Apple's position, 35 percent said they disagreed and 20 percent said they did not know," and that "a majority of Americans do not want the government to have access to their phone and Internet communications, even if it is done in the name of stopping terror attacks."

One of the worst possible outcomes from this story is that people stop installing security updates because they don't trust them. After all, a security update mechanism is also a mechanism by which the government can install a backdoor. Here's one essay that talks about that. Here's another.
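
The reason an update mechanism doubles as a potential backdoor channel is that trusting updates usually reduces to trusting whatever the vendor's signing key has signed. Here's a minimal sketch of that check, assuming the third-party cryptography package and an Ed25519 vendor key (details not taken from the essays linked above): anything the key holder signs, voluntarily or under compulsion, passes it.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def update_is_trusted(update_bytes, signature, vendor_public_key_bytes):
    """Accept an update only if it carries a valid vendor signature.
    The check binds trust to whoever holds the signing key: anything that
    key signs -- including a compelled backdoor -- will pass this gate."""
    vendor_key = Ed25519PublicKey.from_public_bytes(vendor_public_key_bytes)
    try:
        vendor_key.verify(signature, update_bytes)
    except InvalidSignature:
        return False  # reject unsigned or tampered updates
    return True  # caller proceeds to install; the gate is the signature alone
```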

Cory Doctorow comments on the FBI's math denialism. Yochai Benkler sees this as a symptom of a greater breakdown in government trust. More good commentary from Jeff Schiller, Julian Sanchez, and Jonathan Zdziarski. Marcy Wheeler's comments. Two posts by Dan Wallach. Michael Chertoff and associates weigh in on the side of security over surveillance.

Here's a Catholic op-ed on Apple's side. Bill Gates sides with the FBI. And a great editorial cartoon.

Here's high snark from Stewart Baker. Baker asks some very good (and very snarky) questions. But the questions are beside the point. This case isn't about Apple or whether Apple is being hypocritical, any more than climate change is about Al Gore's character. This case is about the externalities of what the government is asking for.

One last thing to read.

Okay, one more, on the more general back door issue.

EDITED TO ADD (3/2): Wall Street Journal editorial. And here's video from the House Judiciary Committee hearing. Skip to around 34:50 to get to the actual beginning.

EDITED TO ADD (3/3): Interview with Rep. Darrell Issa. And at the RSA Conference this week, both Defense Secretary Ash Carter and Microsoft's chief legal officer Brad Smith sided with Apple against the FBI.

EDITED TO ADD (3/4): Comments on the case from the UN High Commissioner for Human Rights.

Posted on March 1, 2016 at 6:47 AM

Resilient Systems News: IBM to Buy Resilient Systems

Today, IBM announced its intention to purchase my company, Resilient Systems. (Yes, the rumors were basically true.)

I think this is a great development for Resilient Systems and its incident-response platform. (I know, but that's what analysts are calling it.) IBM is an ideal partner for Resilient, and one that I have been quietly hoping would acquire it for over a year now. IBM has a unique combination of security products and services, and an existing organization that will help Resilient immeasurably. It's a good match.

Last year, Resilient integrated with IBM's SIEM -- that's Security Information and Event Management -- system, QRadar. My guess is that's what attracted IBM to us in the first place. Resilient has the platform that makes QRadar actionable. Conversely, QRadar makes Resilient's platform more powerful. The products are each good separately, but really good together.

And to IBM's credit, it understood that its customers have all sorts of protection and detection security products -- both IBM's and others -- and no single response hub to make sense of it all. This is what Resilient does extremely well, and can now do for IBM's customers globally.

IBM is one of the largest enterprise security companies in the world. That's not obvious; the 6,500-person IBM Security organization gets lost in the 390,000-person company. It has $2 billion in annual sales. It has a great reputation with both customers and analysts. And while Resilient is the industry leader in its field and has a great reputation, large companies like to buy from other large companies. Resilient has repeatedly sold to large enterprise customers, but it always takes some convincing. Being part of IBM makes it a safe choice. IBM also has a sales and service force that will allow Resilient to scale quickly. The company could have done it on its own eventually, but it would have taken many years.

It's a sad reality in tech that too often -- once, unfortunately, in my personal experience -- acquisitions don't work out for either the acquirer or the acquiree. Deals are made in optimism, but the reality is much less rosy.

I don't think that will happen here. As an acquirer, IBM has a history of effectively integrating the teams and the technologies it acquires. It has bought something like 15 security companies in the past decade -- five in the past two years alone -- and has (more or less) successfully integrated all of them. It carefully selects the companies it buys, spending a lot of time making sure the integration is successful. I was stunned by the amount of work the people from IBM did over the past two months, analyzing every nook and cranny of Resilient in detail: both to verify what they were buying and to figure out how to successfully integrate it.

IBM is going through a lot of reorganizing right now, but security is one of its big bets. It's the fastest-growing vendor in the industry. It hired 1,000 security people in 2015. It needs to continue to grow, and Resilient is now a part of that growth.

Finally, IBM is an East Coast company. This may seem like a trivial point, but Resilient Systems is very much a product of the Boston area. I didn't want Resilient to be a far-flung satellite of a Silicon Valley company. IBM Security is also headquartered in Cambridge, just five T stops away. That's way better than a seven-hour no-legroom bad-food transcontinental flight away.

Random aside: this will be the third company I will have worked for whose name is no longer an acronym for its longer, original name.

When I joined Resilient Systems just over two years ago, I assumed that it would eventually be purchased by a large and diversified company. Acquisitions in the security space are hot right now, and I have long believed that security will be subsumed by more general IT services. Surveying the field, IBM was always at the top of my list. Resilient had several suitors who expressed interest in purchasing it, as well as many investors who wanted to put money into the company. This was our best option.

We're still working out what I'll be doing at IBM; these past months have focused more on the company than on me personally. I know they want me to be involved in all of IBM Security. The people I'll be working with know I'll continue to blog and write books. (They also know that my website is way more popular than theirs.) They know I'll continue to talk about politically sensitive topics. They know they won't be able to edit or constrain my writings and speaking. At least, they say they know it; we'll see what actually happens. But I'm optimistic. There are other IBM people whose public writings do not represent the views of IBM -- so there's precedent.

All in all, this is great news for Resilient Systems and -- I hope -- great news for IBM. We're still exhibiting at the RSA Conference. I'm still serving a curated cocktail at the booth (#1727, South Hall) on Tuesday from 4:00-6:00. We're still giving away signed copies of Data and Goliath. I'm not sure what sort of new signage we'll have. No one liked my idea of a large spray-painted "Under New Management" sign nailed to the side of the booth, but I'm still lobbying for that.

Posted on February 29, 2016 at 11:08 AM
