March 15, 2016

by Bruce Schneier
CTO, Resilient Systems, Inc.

A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.

For back issues, or to subscribe, visit <>.

You can read this issue on the web at <>. These same essays and news items appear in the "Schneier on Security" blog at <>, along with a lively and intelligent comment section. An RSS feed is available.

In this issue:

Data Is a Toxic Asset

Thefts of personal information aren't unusual. Every week, thieves break into networks and steal data about people, often tens of millions at a time. Most of the time it's information that's needed to commit fraud, as happened in 2015 to Experian and the IRS.

Sometimes it's stolen for purposes of embarrassment or coercion, as in the 2015 cases of Ashley Madison and the US Office of Personnel Management. The latter exposed highly sensitive personal data that affects the security of millions of government employees, probably to the Chinese. Always it's personal information about us, information that we shared with the expectation that the recipients would keep it secret. And in every case, they did not.

The telecommunications company TalkTalk admitted that its data breach last year resulted in criminals using customer information to commit fraud. This was more bad news for a company that's been hacked three times in the past 12 months, and has already seen some disastrous effects from losing customer data, including 60 million pounds (about $83 million) in damages and over 100,000 customers. Its stock price took a pummeling as well.

People have been writing about 2015 as the year of data theft. I'm not sure if more personal records were stolen last year than in other recent years, but it certainly was a year for big stories about data thefts. I also think it was the year that industry started to realize that data is a toxic asset.

The phrase "big data" refers to the idea that large databases of seemingly random data about people are valuable. Retailers save our purchasing habits. Cell phone companies and app providers save our location information.

Telecommunications providers, social networks, and many other types of companies save information about who we talk to and share things with. Data brokers save everything about us they can get their hands on. This data is saved and analyzed, bought and sold, and used for marketing and other persuasive purposes.

And because saving all this data is so cheap, there's no reason not to save as much as possible, and save it all forever. Figuring out what isn't worth saving is hard. And because someday the companies might figure out how to turn the data into money, until recently there was absolutely no downside to saving everything. That changed this past year.

What all these data breaches are teaching us is that data is a toxic asset and saving it is dangerous.

Saving it is dangerous because it's highly personal. Location data reveals where we live, where we work, and how we spend our time. If we all have a location tracker like a smartphone, correlating data reveals who we spend our time with -- including who we spend the night with.
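How easily those correlations fall out of raw location data can be sketched in a few lines. Everything here is invented for illustration -- names, tower IDs, and timestamps are hypothetical, and real analyses work on billions of records:

```python
# Toy sketch: given location pings as (person, cell_tower, hour) records,
# pairs of people who repeatedly appear at the same tower at the same hour
# fall out of a simple grouping. All data below is made up.
from collections import Counter
from itertools import combinations

pings = [
    ("alice", "tower_17", "2016-03-01T23"),
    ("bob",   "tower_17", "2016-03-01T23"),
    ("alice", "tower_17", "2016-03-02T23"),
    ("bob",   "tower_17", "2016-03-02T23"),
    ("carol", "tower_42", "2016-03-01T09"),
]

# Group people by (tower, hour) slot...
by_slot = {}
for person, tower, hour in pings:
    by_slot.setdefault((tower, hour), set()).add(person)

# ...then count how often each pair of people co-occurs in a slot.
pair_counts = Counter()
for people in by_slot.values():
    for pair in combinations(sorted(people), 2):
        pair_counts[pair] += 1

print(pair_counts.most_common(1))  # [(('alice', 'bob'), 2)]
```

Two late-night co-occurrences are enough to flag alice and bob as a pair -- no content, just metadata.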

Our Internet search data reveals what's important to us, including our hopes, fears, desires and secrets. Communications data reveals who our intimates are, and what we talk about with them. I could go on. Our reading habits, or purchasing data, or data from sensors as diverse as cameras and fitness trackers: All of it can be intimate.

Saving it is dangerous because many people want it. Of course companies want it; that's why they collect it in the first place. But governments want it, too. In the United States, the National Security Agency and FBI use secret deals, coercion, threats and legal compulsion to get at the data. Foreign governments just come in and steal it. When a company with personal data goes bankrupt, it's one of the assets that gets sold.

Saving it is dangerous because it's hard for companies to secure. For a lot of reasons, computer and network security is very difficult. Attackers have an inherent advantage over defenders, and a sufficiently skilled, funded and motivated attacker will always get in.

And saving it is dangerous because failing to secure it is damaging. It will reduce a company's profits, reduce its market share, hurt its stock price, cause it public embarrassment, and -- in some cases -- result in expensive lawsuits and occasionally, criminal charges.

All this makes data a toxic asset, and it continues to be toxic as long as it sits in a company's computers and networks. The data is vulnerable, and the company is vulnerable. It's vulnerable to hackers and governments. It's vulnerable to employee error. And when there's a toxic data spill, millions of people can be affected. The 2015 Anthem Health data breach affected 80 million people. The 2013 Target Corp. breach affected 110 million.

This toxic data can sit in organizational databases for a long time. Some of the stolen Office of Personnel Management data was decades old. Do you have any idea which companies still have your earliest e-mails, or your earliest posts on that now-defunct social network?

If data is toxic, why do organizations save it?

There are three reasons. The first is that we're in the middle of the hype cycle of big data. Companies and governments are still punch-drunk on data, and have believed the wildest of promises on how valuable that data is. The research showing that more data isn't necessarily better, and that there are serious diminishing returns when adding additional data to processes like personalized advertising, is just starting to come out.

The second is that many organizations are still downplaying the risks. Some simply don't realize just how damaging a data breach would be. Some believe they can completely protect themselves against a data breach, or at least that their legal and public relations teams can minimize the damage if they fail. And while there's certainly a lot that companies can do technically to better secure the data they hold about all of us, there's no better security than deleting the data.

The last reason is that some organizations understand both the first two reasons and are saving the data anyway. The culture of venture-capital-funded start-up companies is one of extreme risk taking. These are companies that are always running out of money, that always know their impending death date.

They are so far from profitability that their only hope for surviving is to get even more money, which means they need to demonstrate rapid growth or increasing value. This motivates those companies to take risks that larger, more established, companies would never take. They might take extreme chances with our data, even flout regulations, because they literally have nothing to lose. And often, the most profitable business models are the most risky and dangerous ones.

We can be smarter than this. We need to regulate what corporations can do with our data at every stage: collection, storage, use, resale and disposal. We can make corporate executives personally liable so they know there's a downside to taking chances. We can make business models that involve massively surveilling people less compelling, simply by making certain business practices illegal.

The Ashley Madison data breach was such a disaster for the company because it saved its customers' real names and credit card numbers. It didn't have to do it this way. It could have processed the credit card information, given the user access, and then deleted all identifying information.
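A minimal sketch of that data-minimization design, with every name and function here hypothetical -- this is the shape of the idea, not any company's actual code:

```python
# Hypothetical sketch: charge the card, hand back an opaque access token,
# and retain nothing identifying. The payment call is a stub standing in
# for a real payment-processor API.
import secrets

def charge_card(card_number: str) -> None:
    pass  # stand-in for a real payment-processor call

def grant_access(name: str, card_number: str, accounts: dict) -> str:
    """Process a payment, then store only a random token -- no PII."""
    charge_card(card_number)          # payment happens...
    token = secrets.token_hex(16)     # ...then an opaque credential is minted
    accounts[token] = {"paid": True}  # nothing identifying is retained
    # name and card_number go out of scope here; nothing writes them to disk
    return token

accounts = {}
token = grant_access("J. Doe", "4111111111111111", accounts)
# A breach of `accounts` now yields tokens, not names or card numbers.
assert "J. Doe" not in str(accounts) and "4111" not in str(accounts)
```

The trade-off is exactly the one described below: the token is the only credential, so a lost token means a harder account-recovery path.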

To be sure, it would have been a different company. It would have had less revenue, because it couldn't charge users a monthly recurring fee. Users who lost their password would have had more trouble re-accessing their account. But it would have been safer for its customers.

Similarly, the Office of Personnel Management didn't have to store everyone's information online and accessible. It could have taken older records offline, or at least onto a separate network with more secure access controls. Yes, it wouldn't be immediately available to government employees doing research, but it would have been much more secure.

Data is a toxic asset. We need to start thinking about it as such, and treat it as we would any other source of toxicity. To do anything else is to risk our security and privacy.

This essay previously appeared on

Experian breach:

IRS breach:

TalkTalk breach:

Big data breaches of 2015:

NSA tactics:

Data sold as an asset:

Anthem Health breach:

Target Corporation breach:

The FBI vs Apple: Decrypting an iPhone

Last month, a federal magistrate ordered Apple to assist the FBI in hacking into the iPhone used by one of the San Bernardino shooters. Apple will fight this order in court.

The policy implications are complicated. The FBI wants to set a precedent that tech companies will assist law enforcement in breaking their users' security, and the technology community is afraid that the precedent will limit what sorts of security features it can offer customers. The FBI sees this as a privacy vs. security debate, while the tech community sees it as a security vs. surveillance debate.

The technology considerations are more straightforward, and shine a light on the policy questions.

The iPhone 5c in question is encrypted. This means that someone without the key cannot get at the data. This is a good security feature. Your phone is a very intimate device. It is likely that you use it for private text conversations, and that it's connected to your bank accounts. Location data reveals where you've been, and correlating multiple phones reveals who you associate with. Encryption protects your phone if it's stolen by criminals. Encryption protects the phones of dissidents around the world if they're taken by local police. It protects all the data on your phone, and the apps that increasingly control the world around you.

This encryption depends on the user choosing a secure password, of course. If you had an older iPhone, you probably just used the default four-digit password. That's only 10,000 possible passwords, making it pretty easy to guess. If the user enabled the more secure alphanumeric option, the password is much harder to guess.
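The keyspace arithmetic is simple enough to check directly. These figures are plain combinatorics, not Apple-specific numbers:

```python
# Back-of-the-envelope keyspace sizes for the password types mentioned above.
digits_4 = 10 ** 4    # four-digit password: 10,000 possibilities
digits_6 = 10 ** 6    # six-digit password: 1,000,000
alnum_8 = 62 ** 8     # 8 characters drawn from [a-zA-Z0-9]

print(f"{digits_4:,}")  # 10,000
print(f"{digits_6:,}")  # 1,000,000
print(f"{alnum_8:,}")   # 218,340,105,584,896
```

Even a short alphanumeric password multiplies the search space by a factor of tens of billions over the four-digit default.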

Apple added two more security features to the iPhone. First, a phone can be configured to erase its data after too many incorrect password guesses. Second, the phone enforces a delay between password guesses. The delay isn't really noticeable if you mistype your password once and have to retype it, but it's a large barrier for anyone trying password after password in a brute-force attempt to break into the phone.
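To see why the delay matters, here's a rough estimate of brute-force time. The per-guess figures are illustrative assumptions, not measured iOS numbers, and real iOS delays escalate with each failure and can trigger the erase feature:

```python
# Expected time to find a password after searching half the keyspace,
# at different enforced per-guess delays. Illustrative arithmetic only.
def expected_hours(keyspace: int, delay_seconds: float) -> float:
    return (keyspace / 2) * delay_seconds / 3600

# Four-digit password, 80 ms per guess (assumed fast hardware, no delay):
print(round(expected_hours(10_000, 0.08), 2))  # 0.11 hours
# Same password with an assumed 5-second enforced delay between guesses:
print(round(expected_hours(10_000, 5.0), 2))   # 6.94 hours
```

The delay alone turns a minutes-long attack into days, which is exactly the barrier the FBI wants Apple's rewritten software to remove.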

But that iPhone has a security flaw. While the data is encrypted, the software controlling the phone is not. This means that someone can create a hacked version of the software and install it on the phone without the consent of the phone's owner and without knowing the encryption key. This is what the FBI -- and now the court -- is demanding Apple do: It wants Apple to rewrite the phone's software to make it possible to guess passwords quickly and automatically.

The FBI's demands are specific to one phone, which might make its request seem reasonable if you don't consider the technological implications: Authorities have the phone in their lawful possession, and they only need help seeing what's on it in case it can tell them something about how the San Bernardino shooters operated. But the hacked software the court and the FBI wants Apple to provide would be general. It would work on any phone of the same model. It has to.

Make no mistake; this is what a backdoor looks like. This is an existing vulnerability in iPhone security that could be exploited by anyone.

There's nothing preventing the FBI from writing that hacked software itself, aside from budget and manpower issues. There's every reason to believe, in fact, that such hacked software has been written by intelligence organizations around the world. Have the Chinese, for instance, written a hacked Apple operating system that records conversations and automatically forwards them to police? They would need to have stolen Apple's code-signing key so that the phone would recognize the hacked software as valid, but governments have done that in the past with other keys and other companies. We simply have no idea who already has this capability.
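The role of the code-signing key can be illustrated with a toy verifier. Real iPhones use asymmetric signatures, so this symmetric HMAC stand-in is only a sketch of the logic: the device boots firmware only if its tag was produced with the secret key, which is why a stolen key lets hacked firmware pass:

```python
# Toy code-signing illustration. HMAC is a symmetric stand-in for the
# asymmetric signatures real devices use; the logic shown is the point.
import hashlib
import hmac

SIGNING_KEY = b"stand-in for the vendor's secret signing key"

def sign(firmware: bytes, key: bytes) -> bytes:
    """Produce the tag the device will check before booting."""
    return hmac.new(key, firmware, hashlib.sha256).digest()

def device_will_boot(firmware: bytes, tag: bytes) -> bool:
    """The device recomputes the tag with its trusted key and compares."""
    expected = sign(firmware, SIGNING_KEY)
    return hmac.compare_digest(expected, tag)

official = b"genuine firmware image"
hacked = b"firmware with guess-rate limits removed"

# Genuine firmware boots; firmware signed with the wrong key does not...
assert device_will_boot(official, sign(official, SIGNING_KEY))
assert not device_will_boot(hacked, sign(hacked, b"attacker's guess"))
# ...but with the stolen key, hacked firmware verifies like the real thing.
assert device_will_boot(hacked, sign(hacked, SIGNING_KEY))
```

Nothing about the check inspects what the firmware does; possession of the key is the whole trust decision.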

And while this sort of attack might be limited to state actors today, remember that attacks always get easier. Technology broadly spreads capabilities, and what was hard yesterday becomes easy tomorrow. Today's top-secret NSA programs become tomorrow's PhD theses and the next day's hacker tools. Soon this flaw will be exploitable by cybercriminals to steal your financial data. Everyone with an iPhone is at risk, regardless of what the FBI demands Apple do.

What the FBI wants to do would make us less secure, even though it's in the name of keeping us safe from harm. Powerful governments, democratic and totalitarian alike, want access to user data for both law enforcement and social control. We cannot build a backdoor that only works for a particular type of government, or only in the presence of a particular court order.

Either everyone gets security or no one does. Either everyone gets access or no one does. The current case is about a single iPhone 5c, but the precedent it sets will apply to all smartphones, computers, cars and everything the Internet of Things promises. The danger is that the court's demands will pave the way to the FBI forcing Apple and others to reduce the security levels of their smartphones and computers, as well as the security of cars, medical devices, homes, and everything else that will soon be computerized. The FBI may be targeting the iPhone of the San Bernardino shooter, but its actions imperil us all.

This essay previously appeared in the Washington Post.

Lots of News and Essays about the FBI vs. Apple

This isn't the most comprehensive list of links, but it's a good one. They're more or less in chronological order.

Initial court order:

Apple's refusal:

Apple's statement to its customers:

Congressman Ted Lieu comments:

An essay about why Tim Cook and Apple are such champions of privacy.

Initial analysis of Apple's case.

Interesting debate on the case:

Nicholas Weaver's comments.

Commentary from another planet:

Ben Adida comments:

Julian Sanchez comments:

New York Times editorial:

Apple's assistance in the past:

What it means for Apple to build a tool:

A good technical explainer:

Tim Cook as a privacy advocate:

How the backdoor works on modern iPhones:

EFF on why you should care:

The grugq on what this all means.

How to set a longer iPhone password and thwart this kind of attack.

Comey on the issue.

And a secret memo describes the FBI's broader strategy to weaken security.

Orin Kerr's thoughts:

Tim Cook's letter to his employees, and an FAQ.

How CALEA relates to all this.

Here's what's not available in the iCloud backup.

The FBI told the county to change the password on the phone -- that's why they can't get in.
What the FBI needs is technical expertise, not backdoors.

And it's not just this iPhone; the FBI wants Apple to break into lots of them.

What China asks of tech companies -- not that this is a country we should particularly want to model.

Former NSA Director Michael Hayden on the case.

There is quite a bit of detail about Apple's efforts to assist the FBI in the legal motion the Department of Justice filed.

Two good essays.

Jennifer Granick's comments.

In my essay, I talk about other countries developing this capability without Apple's knowledge or consent. Making it work requires stealing a copy of Apple's code-signing key, something that has been done by the authors of Stuxnet (probably the US) and Flame (probably Russia) in the past.

If you read just one thing on the technical aspects of this case, read Susan Landau's testimony before the House Judiciary Committee. It's very comprehensive, and very good.

Others testified, too.

Apple is fixing the vulnerability.

The Justice Department wants Apple to unlock nine more phones.

Apple prevailed in a different iPhone unlocking case.

Why the First Amendment is a bad argument.

Why the All Writs Act is the wrong tool.

Dueling poll results: Pew Research reports that 51% side with the FBI, while a Reuters poll reveals that "forty-six percent of respondents said they agreed with Apple's position, 35 percent said they disagreed and 20 percent said they did not know," and that "a majority of Americans do not want the government to have access to their phone and Internet communications, even if it is done in the name of stopping terror attacks."

One of the worst possible outcomes from this story is that people stop installing security updates because they don't trust them. After all, a security update mechanism is also a mechanism by which the government can install a backdoor. Here are two essays that talk about that.

Cory Doctorow comments on the FBI's math denialism.

Yochai Benkler sees this as a symptom of a greater breakdown in government trust.

Good commentary from Jeff Schiller

Good commentary from Julian Sanchez

Good commentary from Jonathan Zdziarski.

Marcy Wheeler's comments.

Two posts by Dan Wallach.

Michael Chertoff and associates weigh in on the side of security over surveillance.

Here's a Catholic op-ed on Apple's side.

Bill Gates sides with the FBI.

And a great editorial cartoon.

Here's high snark from Stewart Baker. Baker asks some very good (and very snarky) questions. But the questions are beside the point. This case isn't about Apple or whether Apple is being hypocritical, any more than climate change is about Al Gore's character. This case is about the externalities of what the government is asking for.

On the ramifications of the case:

On the more general backdoor issue.

Wall Street Journal editorial.

And here's video from the House Judiciary Committee hearing. Skip to around 34:50 to get to the actual beginning.

Interview with Rep. Darrell Issa.

And at the RSA Conference last month, both Defense Secretary Ash Carter and Microsoft's chief legal officer Brad Smith sided with Apple against the FBI.

Comments on the case from the UN High Commissioner for Human Rights.

Op-ed by Apple.

And an interesting article on the divide in the Obama Administration.

Another good essay.

President Obama's comments on encryption: he wants backdoors.
Cory Doctorow reports.

The Importance of Strong Encryption to Security

Encryption keeps you safe. Encryption protects your financial details and passwords when you bank online. It protects your cell phone conversations from eavesdroppers. If you encrypt your laptop -- and I hope you do -- it protects your data if your computer is stolen. It protects our money and our privacy.

Encryption protects the identity of dissidents all over the world. It's a vital tool to allow journalists to communicate securely with their sources, NGOs to protect their work in repressive countries, and lawyers to communicate privately with their clients. It protects our vital infrastructure: our communications network, the power grid and everything else. And as we move to the Internet of Things with its cars and thermostats and medical devices, all of which can destroy life and property if hacked and misused, encryption will become even more critical to our security.

Security is more than encryption, of course. But encryption is a critical component of security. You use strong encryption every day, and our Internet-laced world would be a far riskier place if you didn't.

Strong encryption means unbreakable encryption. Any weakness in encryption will be exploited -- by hackers, by criminals and by foreign governments. Many of the hacks that make the news can be attributed to weak or -- even worse -- nonexistent encryption.

The FBI wants the ability to bypass encryption in the course of criminal investigations. This is known as a "backdoor," because it's a way to get at the encrypted information that bypasses the normal encryption mechanisms. I am sympathetic to such claims, but as a technologist I can tell you that there is no way to give the FBI that capability without weakening the encryption against all adversaries. This is crucial to understand. I can't build an access technology that only works with proper legal authorization, or only for people with a particular citizenship or the proper morality. The technology just doesn't work that way.

If a backdoor exists, then anyone can exploit it. All it takes is knowledge of the backdoor and the capability to exploit it. And while it might temporarily be a secret, it's a fragile secret. Backdoors are how everyone attacks computer systems.

This means that if the FBI can eavesdrop on your conversations or get into your computers without your consent, so can cybercriminals. So can the Chinese. So can terrorists. You might not care if the Chinese government is inside your computer, but lots of dissidents do. As do the many Americans who use computers to administer our critical infrastructure. Backdoors weaken us against all sorts of threats.

Either we build encryption systems to keep everyone secure, or we build them to leave everybody vulnerable.

Even a highly sophisticated backdoor that could only be exploited by nations like the United States and China today will leave us vulnerable to cybercriminals tomorrow. That's just the way technology works: things become easier, cheaper, more widely accessible. Give the FBI the ability to hack into a cell phone today, and tomorrow you'll hear reports that a criminal group used that same ability to hack into our power grid.

The FBI paints this as a trade-off between security and privacy. It's not. It's a trade-off between more security and less security. Our national security needs strong encryption. I wish I could give the good guys the access they want without also giving the bad guys access, but I can't. If the FBI gets its way and forces companies to weaken encryption, all of us -- our data, our networks, our infrastructure, our society -- will be at risk.

This essay previously appeared in the New York Times "Room for Debate" blog.
It's something I seem to need to say again and again.

When bad security can destroy life and property:

Any backdoor weakens security:

The debate is between more security and less security:

National security needs strong encryption:


More psychological research on our reaction to terrorism and mass violence:
This is related:

Both Dutch and UK police are training eagles to attack drones.

Trust is a complex social phenomenon, captured very poorly by the binary nature of Internet trust systems. This paper proposes a social consensus system of trust: "Do You Believe in Tinker Bell? The Social Externalities of Trust," by Khaled Baqer and Ross Anderson.
Blog post on the paper:

Teenage hacker is behind attacks against US government:

Bad security vulnerability in glibc DNS:

Four researchers have demonstrated a TEMPEST attack against a laptop, recovering its keys by listening to its electrical emanations. The cost for the attack hardware was about $3,000. For half a century this has been a nation-state-level espionage technique. The cost is continually falling.

For the past couple of months, Forbes has been blocking browsers with ad blockers. Recently, I tried to access a Wired article and the site blocked me for the same reason. I see this as another battle in this continuing arms race, and hope/expect that the ad blockers will update themselves to fool the ad blocker detectors. But in a fine example of irony, the Forbes site has been serving malware in its ads.
And it seems that Forbes is inconsistently using its ad blocker blocker. At least, I was able to get to that linked article last week. But then I couldn't get to another article a few days later.

Interesting research on balancing privacy with surveillance: Michael Kearns, Aaron Roth, Zhiwei Steven Wu, and Grigory Yaroslavtsev, "Private algorithms for the protected in social network search":

Brian Krebs has a really weird story about the built-in eavesdropping by the Chinese-made Foscam security camera.

Law Professor Karen Levy writes about the rise of surveillance in our most intimate activities -- love, sex, romance -- and how it affects those activities.

New research by Rebecca Lipman, "Online Privacy and the Invisible Market for Our Data." The paper argues that notice and consent doesn't work, and suggests how it could be made to work.

This is more on the "data as exhaust" metaphor. It's a research paper: Gavin J.D. Smith, "Surveillance, Data and Embodiment: On the Work of Being Watched."

The company Dstillery tracked Iowa caucusgoers by their cell phones:

This interesting study tries to build a mathematical model for the continued secrecy of conspiracies, and tries to predict how long before they will be revealed to the general public, either wittingly or unwittingly.
This essay debunks the above research.
Lots on the psychology of conspiracy theories here:

Many wireless keyboards have a security vulnerability that allows someone to hack the computer using the keyboard-computer link.

Earlier this week, we learned of yet another attack against SSL/TLS where an attacker can force people to use insecure algorithms. It's called DROWN. Here's a good news article on the attack, the technical paper describing the attack, and a very good technical blog post by Matthew Green.
As an aside, I am getting pretty annoyed at all the marketing surrounding vulnerabilities these days. Vulnerabilities don't need a catchy name, a dedicated website -- even though it's a very good website -- and a logo.

This is the first time I've heard of this clever hack. Bicycle thieves saw through a bicycle rack and then tape it back together, so that unsuspecting people chain their bikes to it.

New paper: "The Economics of Privacy," by Alessandro Acquisti, Curtis R. Taylor, and Liad Wagman.

A group of pirates -- the real kind -- determined which cargo to steal by hacking into a shipping company's database.

This is an excellent article on the December hack of Ukraine's power grid.

Plagiarism in crossword puzzles discovered through big-data analysis:

New credit card skimmers are hidden inside the card readers, making them impossible to spot.

A Citizen Lab research study of Chinese attack and espionage tactics against Tibetan networks and users.

Looks like tens of thousands of ISIS documents have been leaked. Where did they come from? We don't know.

This analysis of Yemeni cell phone metadata shows how powerful a surveillance tool it is:

Interesting research: "Third-party punishment as a costly signal of trustworthiness":

The New York Times is reporting that WhatsApp, and its parent company Facebook, may be headed to court over encrypted chat data that the FBI can't decrypt. This case is fundamentally different from the Apple iPhone case. In that case, the FBI is demanding that Apple create a hacking tool to exploit an already existing vulnerability in the iPhone 5c, because they want to get at stored data on a phone that they have in their possession. In the WhatsApp case, chat data is end-to-end encrypted, and there is nothing the company can do to assist the FBI in reading already encrypted messages. This case would be about forcing WhatsApp to make an engineering change in the security of its software to create a new vulnerability -- one that they would be forced to push onto the user's device to allow the FBI to eavesdrop on future communications. This is a much further reach for the FBI, but potentially a reasonable additional step if they win the Apple case.
And once the US demands this, other countries will demand it as well. Note that the government of Brazil has arrested a Facebook employee because WhatsApp is secure.
We live in scary times when our governments want us to reduce our own security.

Ross Anderson liveblogged the 2016 Financial Cryptography conference:

Security Implications of Cash

I saw two related stories last month. The first is about high-denomination currency. The EU is considering dropping its 500-euro note, on the grounds that only criminals need to move around that much cash. In response, Switzerland said that it is not dropping its 1,000-Swiss franc note. Of course, the US leads the way in small money here; its biggest banknote is $100.

This probably matters. Moving and laundering cash is at least as big a logistical and legal problem as moving and selling drugs. On the other hand, countries make a profit from their cash in circulation: it's called seigniorage.

The second story is about the risks associated with legal marijuana dispensaries in the US not being able to write checks, have a bank account, and so on. There's the physical risk of theft and violence, and the logistical nightmare of having to pay a $100K tax bill with marijuana-smelling paper currency.

The first story:

The second story:

WikiLeaks Publishes NSA Target List

As part of an ongoing series of classified NSA target lists and raw intercepts, WikiLeaks published details of the NSA's spying on UN Secretary General Ban Ki-moon, German Chancellor Angela Merkel, Israeli Prime Minister Benjamin Netanyahu, former Italian Prime Minister Silvio Berlusconi, former French President Nicolas Sarkozy, and key Japanese and EU trade representatives. WikiLeaks never says this, but it's pretty obvious that these documents don't come from Snowden's archive.

I've said this before, but it bears repeating. Spying on foreign leaders is exactly what I expect the NSA to do. It's spying on the rest of the world that I have a problem with.

Other leaks in this series include France, Germany, Brazil, Japan, Italy, the European Union, and the United Nations.

Blog entry URL:

New WikiLeaks NSA cables:

WikiLeaks on the NSA and France:

WikiLeaks on the NSA and Germany:

WikiLeaks on the NSA and Brazil:

WikiLeaks on the NSA and Japan:

WikiLeaks on the NSA and Italy:

WikiLeaks on the NSA and the European Union:

WikiLeaks on the NSA and the United Nations:

BoingBoing post:

Schneier News

I'm speaking at the University of New England in Portland, Maine, on March 23, 2016.

I'm speaking at the Crypto Summit in San Francisco on March 30, 2016.

I'm speaking at RightsCon in San Francisco on March 31, 2016.

Here's a video of my talk at the University of Ottawa on "Security and Privacy in the World-Sized Web".

Here's an article about that same talk at the RSA Conference:

Here's a Q&A with me:

Here's another interview with me, mostly about the acquisition:

Resilient Systems News: IBM to Buy Resilient Systems

A couple of weeks ago, IBM announced its intention to purchase my company, Resilient Systems. (Yes, the rumors were basically true.)

I think this is a great development for Resilient Systems and its incident-response platform. (I know, but that's what analysts are calling it.) IBM is an ideal partner for Resilient, and one that I have been quietly hoping would acquire it for over a year now. IBM has a unique combination of security products and services, and an existing organization that will help Resilient immeasurably. It's a good match.

Last year, Resilient integrated with IBM's SIEM -- that's Security Information and Event Management -- system, QRadar. My guess is that's what attracted IBM to us in the first place. Resilient has the platform that makes QRadar actionable. Conversely, QRadar makes Resilient's platform more powerful. The products are each good separately, but really good together.

And to IBM's credit, it understood that its customers have all sorts of protection and detection security products -- both IBM's and others -- and no single response hub to make sense of it all. This is what Resilient does extremely well, and can now do for IBM's customers globally.

IBM is one of the largest enterprise security companies in the world. That's not obvious; the 6,500-person IBM Security organization gets lost in the 390,000-person company. It has $2 billion in annual sales. It has a great reputation with both customers and analysts. And while Resilient is the industry leader in its field and has a great reputation, large companies like to buy from other large companies. Resilient has repeatedly sold to large enterprise customers, but it always takes some convincing. Being part of IBM makes it a safe choice. IBM also has a sales and service force that will allow Resilient to scale quickly. The company could have done it on its own eventually, but it would have taken many years.

It's a sad reality in tech that too often -- once, unfortunately, in my personal experience -- acquisitions don't work out for either the acquirer or the acquiree. Deals are made in optimism, but the reality is much less rosy.

I don't think that will happen here. As an acquirer, IBM has a history of effectively integrating the teams and the technologies it acquires. It has bought something like 15 security companies in the past decade -- five in the past two years alone -- and has (more or less) successfully integrated all of them. It carefully selects the companies it buys, spending a lot of time making sure the integration is successful. I was stunned by the amount of work the people from IBM did over the past two months, analyzing every nook and cranny of Resilient in detail: both to verify what they were buying and to figure out how to successfully integrate it.

IBM is going through a lot of reorganizing right now, but security is one of its big bets. It's the fastest-growing vendor in the industry. It hired 1,000 security people in 2015. It needs to continue to grow, and Resilient is now a part of that growth.

Finally, IBM is an East Coast company. This may seem like a trivial point, but Resilient Systems is very much a product of the Boston area. I didn't want Resilient to be a far-flung satellite of a Silicon Valley company. IBM Security is also headquartered in Cambridge, just five T stops away. That's way better than a seven-hour no-legroom bad-food transcontinental flight away.

Random aside: this will be the third company I will have worked for whose name is no longer an acronym for its longer, original name.

When I joined Resilient Systems just over two years ago, I assumed that it would eventually be purchased by a large and diversified company. Acquisitions in the security space are hot right now, and I have long believed that security will be subsumed by more general IT services. Surveying the field, IBM was always at the top of my list. Resilient had several suitors who expressed interest in purchasing it, as well as many investors who wanted to put money into the company. This was our best option.

We're still working out what I'll be doing at IBM; the negotiations these past months focused more on the company than on me personally. I know they want me to be involved in all of IBM Security. The people I'll be working with know I'll continue to blog and write books. (They also know that my website is way more popular than theirs.) They know I'll continue to talk about politically sensitive topics. They know they won't be able to edit or constrain my writings and speaking. At least, they say they know it; we'll see what actually happens. But I'm optimistic. There are other IBM people whose public writings do not represent the views of IBM -- so there's precedent.

All in all, this is great news for Resilient Systems and -- I hope -- great news for IBM. We still exhibited at the RSA Conference. I still served a curated cocktail at the booth (#1727, South Hall) on Tuesday from 4:00-6:00. We still gave away signed copies of "Data and Goliath." But no one liked my idea of a large spray-painted "Under New Management" sign nailed to the side of the booth.

Press release:

Resilient Systems:

The rumors from the previous week:

An "incident response platform":

IBM Security:

Security is hot right now:

My essay on how security will be subsumed:

IBM's blog:

Resilient at the RSA Conference:

Cheating at Professional Bridge

Interesting article on detecting cheaters in professional bridge using big-data analysis.

Basically, a big part of the game is the communication of information between the partners. But only certain communications channels are permitted. Cheating involves partners sending secret signals to each other.

This kind of cheating can be detected by analyzing lots of games the partners play. If they consistently make plays that should turn out badly given the information they are legally allowed to know, but that turn out well given the actual distribution of the cards, then we know that some sort of secret signaling is involved.
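The statistical intuition can be sketched with a toy calculation. (This is my illustration, not the article's method; the numbers -- 40 plays, a 30% baseline success rate -- are made up for the example.) If a pair repeatedly makes "anti-percentage" plays that succeed far more often than chance allows, the probability of that record under honest play collapses quickly:

```python
from math import comb

def binomial_tail(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p): the chance of k or more
    successes among n plays if each succeeds with probability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical numbers: a pair made 40 anti-percentage plays -- plays
# that should fail given the information legally available to them --
# and 30 of them succeeded. If such plays genuinely succeed only 30%
# of the time, how surprising is that record?
p_value = binomial_tail(30, 40, 0.30)
print(f"{p_value:.2e}")  # vanishingly small: strong evidence of signaling
```

One lucky hand proves nothing; the power comes from aggregating many deals, which is why this only became practical once hand records and results were available in bulk.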

Simultaneous Discovery of Vulnerabilities

In the conversation about zero-day vulnerabilities and whether "good" governments should disclose or hoard vulnerabilities, one of the critical variables is independent discovery. That is, if it is unlikely that someone else will independently discover an NSA-discovered vulnerability -- the NSA calls this "NOBUS," for "nobody but us" -- then it is not unreasonable for the NSA to keep that vulnerability secret and use it for attack. If, on the other hand, it is likely that someone else will discover and use it, then they should probably disclose it to the vendor and get it patched.

The likelihood partly depends on whether vulnerabilities are sparse or dense. But that assumes that vulnerability discovery is random. And there's a lot of evidence that it's not.
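To see why independence matters, here's a toy model (my own illustration, with made-up rates): if other researchers' discoveries of the same bug arrived as a Poisson process, the chance of independent rediscovery while a vulnerability sits in a stockpile would be 1 - e^(-rate x years). Even modest rates add up over a hoard's lifetime -- and if discovery is clustered rather than random, as the evidence below suggests, these figures understate the risk:

```python
from math import exp

def rediscovery_probability(rate_per_year, years):
    """Under a Poisson model with the given discovery rate, the
    probability of at least one independent rediscovery within
    the given number of years: 1 - exp(-rate * years)."""
    return 1 - exp(-rate_per_year * years)

# Illustrative rates only. Even if others stumble on the same bug just
# once every five years on average (rate 0.2/year), hoarding it for
# five years means roughly a 63% chance someone else has it too.
for rate in (0.05, 0.2, 1.0):
    print(f"rate {rate}/yr, 5 yrs: {rediscovery_probability(rate, 5):.2f}")
```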

For example, there's a new GNU C vulnerability that lay dormant for years and was independently discovered by multiple researchers, all around the same time.

It remains unclear why or how glibc maintainers allowed a bug of this magnitude to be introduced into their code, remain undiscovered for seven years, and then go unfixed for seven months following its report. By Google's account, the bug was independently uncovered by at least two and possibly three separate groups who all worked to have it fixed. It wouldn't be surprising if over the years the vulnerability was uncovered by additional people and possibly exploited against unsuspecting targets.

Similarly, Heartbleed lay dormant for years before it was independently discovered by both Codenomicon and Google.

This is not uncommon. It's almost like there's something in the air that makes a particular vulnerability shallow and easy to discover. This implies that NOBUS is not a useful concept.


The new GNU C vulnerability:


Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise. You can subscribe, unsubscribe, or change your address on the Web at <>. Back issues are also available at that URL.

Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.

CRYPTO-GRAM is written by Bruce Schneier. Bruce Schneier is an internationally renowned security technologist, called a "security guru" by The Economist. He is the author of 13 books -- including his latest, "Data and Goliath" -- as well as hundreds of articles, essays, and academic papers. His influential newsletter "Crypto-Gram" and his blog "Schneier on Security" are read by over 250,000 people. He has testified before Congress, is a frequent guest on television and radio, has served on several government committees, and is regularly quoted in the press. Schneier is a fellow at the Berkman Center for Internet and Society at Harvard Law School, a program fellow at the New America Foundation's Open Technology Institute, a board member of the Electronic Frontier Foundation, an Advisory Board Member of the Electronic Privacy Information Center, and the Chief Technology Officer at Resilient Systems, Inc. See <>.

Crypto-Gram is a personal newsletter. Opinions expressed are not necessarily those of Resilient Systems, Inc.

Copyright (c) 2016 by Bruce Schneier.

Sidebar photo of Bruce Schneier by Joe MacInnis.