Blog: February 2016 Archives

Resilient Systems News: IBM to Buy Resilient Systems

Today, IBM announced its intention to purchase my company, Resilient Systems. (Yes, the rumors were basically true.)

I think this is a great development for Resilient Systems and its incident-response platform. (I know, but that’s what analysts are calling it.) IBM is an ideal partner for Resilient, and one that I have been quietly hoping would acquire it for over a year now. IBM has a unique combination of security products and services, and an existing organization that will help Resilient immeasurably. It’s a good match.

Last year, Resilient integrated with IBM’s SIEM—that’s Security Information and Event Management—system, QRadar. My guess is that’s what attracted IBM to us in the first place. Resilient has the platform that makes QRadar actionable. Conversely, QRadar makes Resilient’s platform more powerful. The products are each good separately, but really good together.

And to IBM’s credit, it understood that its customers have all sorts of protection and detection security products—both IBM’s and others—and no single response hub to make sense of it all. This is what Resilient does extremely well, and can now do for IBM’s customers globally.

IBM is one of the largest enterprise security companies in the world. That’s not obvious; the 6,500-person IBM Security organization gets lost in the 390,000-person company. It has $2 billion in annual sales. It has a great reputation with both customers and analysts. And while Resilient is the industry leader in its field and has a great reputation, large companies like to buy from other large companies. Resilient has repeatedly sold to large enterprise customers, but it always takes some convincing. Being part of IBM makes it a safe choice. IBM also has a sales and service force that will allow Resilient to scale quickly. The company could have done it on its own eventually, but it would have taken many years.

It’s a sad reality in tech that too often—once, unfortunately, in my personal experience—acquisitions don’t work out for either the acquirer or the acquiree. Deals are made in optimism, but the reality is much less rosy.

I don’t think that will happen here. As an acquirer, IBM has a history of effectively integrating the teams and the technologies it acquires. It has bought something like 15 security companies in the past decade—five in the past two years alone—and has (more or less) successfully integrated all of them. It carefully selects the companies it buys, spending a lot of time making sure the integration is successful. I was stunned by the amount of work the people from IBM did over the past two months, analyzing every nook and cranny of Resilient in detail: both to verify what they were buying and to figure out how to successfully integrate it.

IBM is going through a lot of reorganizing right now, but security is one of its big bets. It’s the fastest-growing vendor in the industry. It hired 1,000 security people in 2015. It needs to continue to grow, and Resilient is now a part of that growth.

Finally, IBM is an East Coast company. This may seem like a trivial point, but Resilient Systems is very much a product of the Boston area. I didn’t want Resilient to be a far-flung satellite of a Silicon Valley company. IBM Security is also headquartered in Cambridge, just five T stops away. That’s way better than a seven-hour no-legroom bad-food transcontinental flight away.

Random aside: this will be the third company I will have worked for whose name is no longer an acronym for its longer, original, name.

When I joined Resilient Systems just over two years ago, I assumed that it would eventually be purchased by a large and diversified company. Acquisitions in the security space are hot right now, and I have long believed that security will be subsumed by more general IT services. Surveying the field, IBM was always at the top of my list. Resilient had several suitors who expressed interest in purchasing it, as well as many investors who wanted to put money into the company. This was our best option.

We’re still working out what I’ll be doing at IBM; the negotiations of these past months focused more on the company than on me personally. I know they want me to be involved in all of IBM Security. The people I’ll be working with know I’ll continue to blog and write books. (They also know that my website is way more popular than theirs.) They know I’ll continue to talk about politically sensitive topics. They know they won’t be able to edit or constrain my writing and speaking. At least, they say they know it; we’ll see what actually happens. But I’m optimistic. There are other IBM people whose public writings do not represent the views of IBM—so there’s precedent.

All in all, this is great news for Resilient Systems and—I hope—great news for IBM. We’re still exhibiting at the RSA Conference. I’m still serving a curated cocktail at the booth (#1727, South Hall) on Tuesday from 4:00-6:00. We’re still giving away signed copies of Data and Goliath. I’m not sure what sort of new signage we’ll have. No one liked my idea of a large spray-painted “Under New Management” sign nailed to the side of the booth, but I’m still lobbying for that.

EDITED TO ADD (3/17): This is how IBM is positioning us, at least initially.

Posted on February 29, 2016 at 11:08 AM • 50 Comments

More on the "Data as Exhaust" Metaphor

Research paper: Gavin J.D. Smith, “Surveillance, Data and Embodiment: On the Work of Being Watched,” Body and Society, January 2016.

Abstract: Today’s bodies are akin to ‘walking sensor platforms’. Bodies either host, or are the subjects of, an array of sensing devices that act to convert bodily movements, actions and dynamics into circulative data. This article proposes the notions of ‘disembodied exhaust’ and ‘embodied exhaustion’ to conceptualise processes of bodily sensorisation and datafication. As the material body interfaces with networked sensor technologies and sensing infrastructures, it emits disembodied exhaust: gaseous flows of personal information that establish a representational data-proxy. It is this networked actant that progressively structures how embodied subjects experience their daily lives. The significance of this symbiont medium in determining the outcome of interplays between networked individuals and audiences necessitates that it is carefully contrived. The article explores the nature and function of the data-proxy, and its impact on social relations. Drawing on examples that depict individuals engaging with their data-proxies, the article suggests that managing a virtual presence is analogous to a work relation, demanding diligence and investment. But it also shows how the data-proxy operates as a mode of affect that challenges conventional distinctions made between organic and inorganic bodies, agency and actancy, mortality and immortality, presence and absence.

Posted on February 29, 2016 at 6:17 AM • 6 Comments

Notice and Consent

New Research: Rebecca Lipman, “Online Privacy and the Invisible Market for Our Data.” The paper argues that notice and consent doesn’t work, and suggests how it could be made to work.

Abstract: Consumers constantly enter into blind bargains online. We trade our personal information for free websites and apps, without knowing exactly what will be done with our data. There is nominally a notice and choice regime in place via lengthy privacy policies. However, virtually no one reads them. In this ill-informed environment, companies can gather and exploit as much data as technologically possible, with very few legal boundaries. The consequences for consumers are often far-removed from their actions, or entirely invisible to them. Americans deserve a rigorous notice and choice regime. Such a regime would allow consumers to make informed decisions and regain some measure of control over their personal information. This article explores the problems with the current marketplace for our digital data, and explains how we can make a robust notice and choice regime work for consumers.

Posted on February 26, 2016 at 12:22 PM • 12 Comments

Thinking about Intimate Surveillance

Law Professor Karen Levy writes about the rise of surveillance in our most intimate activities—love, sex, romance—and how it affects those activities.

This article examines the rise of the surveillant paradigm within some of our most intimate relationships and behaviors—those relating to love, romance, and sexual activity—and considers what challenges this sort of data collection raises for privacy and the foundations of intimate life.

Data-gathering about intimate behavior was, not long ago, more commonly the purview of state public health authorities, which have routinely gathered personally identifiable information in the course of their efforts to (among other things) fight infectious disease. But new technical capabilities, social norms, and cultural frameworks are beginning to change the nature of intimate monitoring practices. Intimate surveillance is emerging and becoming normalized as primarily an interpersonal phenomenon, one in which all sorts of people engage, for all sorts of reasons. The goal is not top-down management of populations, but establishing knowledge about (and, ostensibly, concomitant control over) one’s own intimate relations and activities.

After briefly describing some scope conditions on this inquiry, I survey several types of monitoring technologies used across the “life course” of an intimate relationship—from dating to sex and romance, from fertility to fidelity, to abuse. I then examine the relationship between data collection, values, and privacy, and close with a few words about the uncertain role of law and policy in the sphere of intimate surveillance.

Posted on February 26, 2016 at 7:33 AM • 9 Comments

Simultaneous Discovery of Vulnerabilities

In the conversation about zero-day vulnerabilities and whether “good” governments should disclose or hoard vulnerabilities, one of the critical variables is the likelihood of independent discovery. That is, if it is unlikely that someone else will independently discover an NSA-discovered vulnerability—the NSA calls this “NOBUS,” for “nobody but us”—then it is not unreasonable for the NSA to keep that vulnerability secret and use it for attack. If, on the other hand, it is likely that someone else will discover and use it, then the NSA should probably disclose it to the vendor and get it patched.

The likelihood partly depends on whether vulnerabilities are sparse or dense. But that assumes that vulnerability discovery is random. And there’s a lot of evidence that it’s not.

For example, there’s a new GNU C Library (glibc) vulnerability that lay dormant for years and was independently discovered by multiple researchers, all around the same time.

It remains unclear why or how glibc maintainers allowed a bug of this magnitude to be introduced into their code, remain undiscovered for seven years, and then go unfixed for seven months following its report. By Google’s account, the bug was independently uncovered by at least two and possibly three separate groups who all worked to have it fixed. It wouldn’t be surprising if over the years the vulnerability was uncovered by additional people and possibly exploited against unsuspecting targets.

Similarly, Heartbleed lay dormant for years before it was independently discovered by both Codenomicon and Google.

This is not uncommon. It’s almost like there’s something in the air that makes a particular vulnerability shallow and easy to discover. This implies that NOBUS is not a useful concept.
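
Here’s a toy way to see what’s at stake, with invented numbers. Even under the charitable assumption that discovery is independent, the rediscovery risk of a hoarded vulnerability compounds per researcher per year; the correlated discovery suggested by the glibc and Heartbleed timelines only makes the risk higher.

    # Toy model, not real data: if n researchers each independently find
    # a given bug with annual probability p, the chance that at least one
    # of them rediscovers a hoarded vulnerability within t years is
    # 1 - (1 - p)**(n * t).
    def rediscovery_risk(p_per_researcher: float, researchers: int,
                         years: float) -> float:
        """P(at least one independent rediscovery within `years`)."""
        return 1.0 - (1.0 - p_per_researcher) ** (researchers * years)

    # Even generously small odds compound; correlated discovery, which is
    # what the evidence above suggests, only makes the risk higher.
    for years in (1, 3, 5, 10):
        print(years, round(rediscovery_risk(0.001, 50, years), 3))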

Posted on February 25, 2016 at 1:14 PM • 29 Comments

The Importance of Strong Encryption to Security

Encryption keeps you safe. Encryption protects your financial details and passwords when you bank online. It protects your cell phone conversations from eavesdroppers. If you encrypt your laptop—and I hope you do—it protects your data if your computer is stolen. It protects our money and our privacy.

Encryption protects the identity of dissidents all over the world. It’s a vital tool to allow journalists to communicate securely with their sources, NGOs to protect their work in repressive countries, and lawyers to communicate privately with their clients. It protects our vital infrastructure: our communications network, the power grid and everything else. And as we move to the Internet of Things with its cars and thermostats and medical devices, all of which can destroy life and property if hacked and misused, encryption will become even more critical to our security.

Security is more than encryption, of course. But encryption is a critical component of security. You use strong encryption every day, and our Internet-laced world would be a far riskier place if you didn’t.

Strong encryption means unbreakable encryption. Any weakness in encryption will be exploited—by hackers, by criminals and by foreign governments. Many of the hacks that make the news can be attributed to weak or—even worse—nonexistent encryption.

The FBI wants the ability to bypass encryption in the course of criminal investigations. This is known as a “backdoor,” because it’s a way to get at the encrypted information that bypasses the normal encryption mechanisms. I am sympathetic to such claims, but as a technologist I can tell you that there is no way to give the FBI that capability without weakening the encryption against all adversaries. This is crucial to understand. I can’t build an access technology that only works with proper legal authorization, or only for people with a particular citizenship or the proper morality. The technology just doesn’t work that way.

If a backdoor exists, then anyone can exploit it. All it takes is knowledge of the backdoor and the capability to exploit it. And while it might temporarily be a secret, it’s a fragile secret. Backdoors are how everyone attacks computer systems.

This means that if the FBI can eavesdrop on your conversations or get into your computers without your consent, so can cybercriminals. So can the Chinese. So can terrorists. You might not care if the Chinese government is inside your computer, but lots of dissidents do. As do the many Americans who use computers to administer our critical infrastructure. Backdoors weaken us against all sorts of threats.

Either we build encryption systems to keep everyone secure, or we build them to leave everybody vulnerable.

Even a highly sophisticated backdoor that could only be exploited by nations like the United States and China today will leave us vulnerable to cybercriminals tomorrow. That’s just the way technology works: things become easier, cheaper, more widely accessible. Give the FBI the ability to hack into a cell phone today, and tomorrow you’ll hear reports that a criminal group used that same ability to hack into our power grid.

The FBI paints this as a trade-off between security and privacy. It’s not. It’s a trade-off between more security and less security. Our national security needs strong encryption. I wish I could give the good guys the access they want without also giving the bad guys access, but I can’t. If the FBI gets its way and forces companies to weaken encryption, all of us—our data, our networks, our infrastructure, our society—will be at risk.

This essay previously appeared in the New York Times “Room for Debate” blog. It’s something I seem to need to say again and again.

Posted on February 25, 2016 at 6:40 AM • 55 Comments

Eavesdropping by the Foscam Security Camera

Brian Krebs has a really weird story about the built-in eavesdropping by the Chinese-made Foscam security camera:

Imagine buying an internet-enabled surveillance camera, network attached storage device, or home automation gizmo, only to find that it secretly and constantly phones home to a vast peer-to-peer (P2P) network run by the Chinese manufacturer of the hardware. Now imagine that the geek gear you bought doesn’t actually let you block this P2P communication without some serious networking expertise or hardware surgery that few users would attempt.

Posted on February 24, 2016 at 12:05 PM • 31 Comments

Research on Balancing Privacy with Surveillance

Interesting research: Michael Kearns, Aaron Roth, Zhiwei Steven Wu, and Grigory Yaroslavtsev, “Private algorithms for the protected in social network search,” PNAS, Jan 2016:

Abstract: Motivated by tensions between data privacy for individual citizens and societal priorities such as counterterrorism and the containment of infectious disease, we introduce a computational model that distinguishes between parties for whom privacy is explicitly protected, and those for whom it is not (the targeted subpopulation). The goal is the development of algorithms that can effectively identify and take action upon members of the targeted subpopulation in a way that minimally compromises the privacy of the protected, while simultaneously limiting the expense of distinguishing members of the two groups via costly mechanisms such as surveillance, background checks, or medical testing. Within this framework, we provide provably privacy-preserving algorithms for targeted search in social networks. These algorithms are natural variants of common graph search methods, and ensure privacy for the protected by the careful injection of noise in the prioritization of potential targets. We validate the utility of our algorithms with extensive computational experiments on two large-scale social network datasets.
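
The flavor of such an algorithm can be sketched in a few lines. This is not the authors’ construction, and it carries none of their privacy guarantees; the graph representation, scoring, and noise scale are invented for illustration. The idea is to perturb the priority queue of a targeted graph search with Laplace noise, in the style of differential privacy, so that the order in which people are examined reveals less about any one protected individual.

    import heapq
    import math
    import random

    def laplace_noise(scale: float) -> float:
        """Sample from a Laplace(0, scale) distribution."""
        u = random.random() - 0.5
        return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

    def noisy_targeted_search(graph, seeds, is_target, budget, noise_scale=1.0):
        """graph: dict node -> list of neighbors; seeds: known targets.
        Examines up to `budget` nodes, prioritized by a noise-perturbed
        count of already-identified targeted neighbors."""
        found = {s for s in seeds if is_target(s)}
        examined = set(seeds)
        frontier = []
        for s in found:
            for v in graph.get(s, []):
                heapq.heappush(frontier, (-(1.0 + laplace_noise(noise_scale)), v))
        while frontier and len(examined) < budget:
            _, node = heapq.heappop(frontier)
            if node in examined:
                continue
            examined.add(node)
            if is_target(node):  # the costly step: surveillance, a check, a test
                found.add(node)
                for v in graph.get(node, []):
                    if v not in examined:
                        score = sum(1 for w in graph.get(v, []) if w in found)
                        heapq.heappush(
                            frontier, (-(score + laplace_noise(noise_scale)), v))
        return found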

Posted on February 24, 2016 at 6:05 AM • 21 Comments

The Ads vs. Ad Blockers Arms Race

For the past month or so, Forbes has been blocking browsers with ad blockers. Today, I tried to access a Wired article and the site blocked me for the same reason.

I see this as another battle in this continuing arms race, and hope/expect that the ad blockers will update themselves to fool the ad blocker detectors.

But in a fine example of irony, the Forbes site has been serving malware in its ads.

And it seems that Forbes is inconsistently using its ad blocker blocker. At least, I was able to get to that linked article last week. But then I couldn’t get to another article a few days later.

Posted on February 23, 2016 at 12:18 PM • 73 Comments

Practical TEMPEST Attack

Four researchers have demonstrated a TEMPEST attack against a laptop, recovering its keys by listening to its electromagnetic emanations. The cost of the attack hardware was about $3,000.

News article:

To test the hack, the researchers first sent the target a specific ciphertext—­in other words, an encrypted message.

“During the decryption of the chosen ciphertext, we measure the EM leakage of the target laptop, focusing on a narrow frequency band,” the paper reads. The signal is then processed, and “a clean trace is produced which reveals information about the operands used in the elliptic curve cryptography,” it continues, which in turn “is used in order to reveal the secret key.”

The equipment used included an antenna, amplifiers, a software-defined radio, and a laptop. This process was being carried out through a 15cm thick wall, reinforced with metal studs, according to the paper.

The researchers obtained the secret key after observing 66 decryption processes, each lasting around 0.05 seconds. “This yields a total measurement time of about 3.3 sec,” the paper reads. It’s important to note that when the researchers say that the secret key was obtained in “seconds,” that’s the total measurement time, and not necessarily how long it would take for the attack to actually be carried out. A real world attacker would still need to factor in other things, such as the target reliably decrypting the sent ciphertext, because observing that process is naturally required for the attack to be successful.

For half a century this has been a nation-state-level espionage technique. The cost is continually falling.

Posted on February 23, 2016 at 5:49 AM • 33 Comments

Decrypting an iPhone for the FBI

Earlier this week, a federal magistrate ordered Apple to assist the FBI in hacking into the iPhone used by one of the San Bernardino shooters. Apple will fight this order in court.

The policy implications are complicated. The FBI wants to set a precedent that tech companies will assist law enforcement in breaking their users’ security, and the technology community is afraid that the precedent will limit what sorts of security features it can offer customers. The FBI sees this as a privacy vs. security debate, while the tech community sees it as a security vs. surveillance debate.

The technology considerations are more straightforward, and shine a light on the policy questions.

The iPhone 5c in question is encrypted. This means that someone without the key cannot get at the data. This is a good security feature. Your phone is a very intimate device. It is likely that you use it for private text conversations, and that it’s connected to your bank accounts. Location data reveals where you’ve been, and correlating multiple phones reveals who you associate with. Encryption protects your phone if it’s stolen by criminals. Encryption protects the phones of dissidents around the world if they’re taken by local police. It protects all the data on your phone, and the apps that increasingly control the world around you.

This encryption depends on the user choosing a secure password, of course. If you had an older iPhone, you probably just used the default four-digit password. That’s only 10,000 possible passwords, making it pretty easy to guess. If the user enabled the more secure alphanumeric option, the password is much harder to guess.

Apple added two more security features to the iPhone. First, a phone can be configured to erase its data after too many incorrect password guesses. Second, it enforces a delay between password guesses. This delay isn’t really noticeable to a user who types the wrong password and then has to retype the correct one, but it’s a large barrier for anyone trying to guess password after password in a brute-force attempt to break into the phone.
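
Some back-of-the-envelope arithmetic shows why those two features matter. The per-guess times below are illustrative assumptions, not Apple’s specifications: roughly 80 milliseconds is about what the hardware key derivation has been reported to cost, and one second stands in for a software-enforced delay.

    # Illustrative arithmetic only; the per-guess times are assumptions.
    def worst_case_days(keyspace: int, seconds_per_guess: float) -> float:
        """Days needed to try every possible passcode at a fixed rate."""
        return keyspace * seconds_per_guess / 86400.0

    for label, keyspace in [
        ("4-digit PIN", 10**4),
        ("6-digit PIN", 10**6),
        ("8-char lowercase alphanumeric", 36**8),
    ]:
        fast = worst_case_days(keyspace, 0.08)  # ~80 ms per guess
        slow = worst_case_days(keyspace, 1.0)   # software-enforced delay
        print(f"{label}: {fast:,.2f} days fast, {slow:,.0f} days delayed")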

But that iPhone has a security flaw. While the data is encrypted, the software controlling the phone is not. This means that someone can create a hacked version of the software and install it on the phone without the consent of the phone’s owner and without knowing the encryption key. This is what the FBI—and now the court—is demanding Apple do: It wants Apple to rewrite the phone’s software to make it possible to guess possible passwords quickly and automatically.

The FBI’s demands are specific to one phone, which might make its request seem reasonable if you don’t consider the technological implications: Authorities have the phone in their lawful possession, and they only need help seeing what’s on it in case it can tell them something about how the San Bernardino shooters operated. But the hacked software the court and the FBI wants Apple to provide would be general. It would work on any phone of the same model. It has to.

Make no mistake; this is what a backdoor looks like. This is an existing vulnerability in iPhone security that could be exploited by anyone.

There’s nothing preventing the FBI from writing that hacked software itself, aside from budget and manpower issues. There’s every reason to believe, in fact, that such hacked software has been written by intelligence organizations around the world. Have the Chinese, for instance, written a hacked Apple operating system that records conversations and automatically forwards them to police? They would need to have stolen Apple’s code-signing key so that the phone would recognize the hacked software as valid, but governments have done that in the past with other keys and other companies. We simply have no idea who already has this capability.

And while this sort of attack might be limited to state actors today, remember that attacks always get easier. Technology broadly spreads capabilities, and what was hard yesterday becomes easy tomorrow. Today’s top-secret NSA programs become tomorrow’s PhD theses and the next day’s hacker tools. Soon this flaw will be exploitable by cybercriminals to steal your financial data. Everyone with an iPhone is at risk, regardless of what the FBI demands Apple do.

What the FBI wants to do would make us less secure, even though it’s in the name of keeping us safe from harm. Powerful governments, democratic and totalitarian alike, want access to user data for both law enforcement and social control. We cannot build a backdoor that only works for a particular type of government, or only in the presence of a particular court order.

Either everyone gets security or no one does. Either everyone gets access or no one does. The current case is about a single iPhone 5c, but the precedent it sets will apply to all smartphones, computers, cars and everything the Internet of Things promises. The danger is that the court’s demands will pave the way to the FBI forcing Apple and others to reduce the security levels of their smartphones and computers, as well as the security of cars, medical devices, homes, and everything else that will soon be computerized. The FBI may be targeting the iPhone of the San Bernardino shooter, but its actions imperil us all.

This essay previously appeared in the Washington Post.

The original essay contained a major error.

I wrote: “This is why Apple fixed this security flaw in 2014. Apple’s iOS 8.0 and its phones with an A7 or later processor protect the phone’s software as well as the data. If you have a newer iPhone, you are not vulnerable to this attack. You are more secure – from the government of whatever country you’re living in, from cybercriminals and from hackers.” Also: “We are all more secure now that Apple has closed that vulnerability.”

That was based on a misunderstanding of the security changes Apple made in what is known as the “Secure Enclave.” It turns out that all iPhones have this security vulnerability: all can have their software updated without knowing the password. The updated code has to be signed with Apple’s key, of course, which adds a major difficulty to the attack.

Dan Guido writes:

If the device lacks a Secure Enclave, then a single firmware update to iOS will be sufficient to disable passcode delays and auto erase. If the device does contain a Secure Enclave, then two firmware updates, one to iOS and one to the Secure Enclave, are required to disable these security features. The end result in either case is the same. After modification, the device is able to guess passcodes at the fastest speed the hardware supports.

The recovered iPhone is a model 5C. The iPhone 5C lacks TouchID and, therefore, lacks a Secure Enclave. The Secure Enclave is not a concern. Nearly all of the passcode protections are implemented in software by the iOS operating system and are replaceable by a single firmware update.

EDITED TO ADD (2/22): Lots more on my previous blog post on the topic.

How to set a longer iPhone password and thwart this kind of attack. Comey on the issue. And a secret memo describes the FBI’s broader strategy to weaken security.

Orin Kerr’s thoughts: Part 1, Part 2, and Part 3.

EDITED TO ADD (2/22): Tim Cook’s letter to his employees, and an FAQ. How CALEA relates to all this. Here’s what’s not available in the iCloud backup. The FBI told the county to change the password on the phone—that’s why they can’t get in. What the FBI needs is technical expertise, not back doors. And it’s not just this iPhone; the FBI wants Apple to break into lots of them. What China asks of tech companies—not that this is a country we should particularly want to model. Former NSA Director Michael Hayden on the case. There is quite a bit of detail about Apple’s efforts to assist the FBI in the legal motion the Department of Justice filed. Two good essays. Jennifer Granick’s comments.

In my essay, I talk about other countries developing this capability without Apple’s knowledge or consent. Making it work requires stealing a copy of Apple’s code-signing key, something that the authors of Stuxnet and Flame (both generally attributed to the US, possibly with Israel) have done in the past with other keys and other companies.

Posted on February 22, 2016 at 6:58 AM • 218 Comments

Security Implications of Cash

I saw two related stories today. The first is about high-denomination currency. The EU is considering dropping its 500-euro note, on the grounds that only criminals need to move around that much cash. In response, Switzerland said that it is not dropping its 1,000-Swiss franc note. Of course, the US leads the way in small money here; its biggest banknote is $100.

This probably matters. Moving and laundering cash is at least as big a logistical and legal problem as moving and selling drugs. On the other hand, countries make a profit from their cash in circulation: it’s called seigniorage.

The second story is about the risks associated with legal marijuana dispensaries in the US not being able to write checks, have a bank account, and so on. There’s the physical risk of theft and violence, and the logistical nightmare of having to pay a $100K tax bill with marijuana-smelling paper currency.

Posted on February 19, 2016 at 6:34 AM • 71 Comments

Underage Hacker Is behind Attacks against US Government

It’s a teenager:

British police have arrested a teenager who allegedly was behind a series of audacious—and, for senior U.S. national security officials, embarrassing—hacks targeting personal accounts of top brass at the CIA, FBI, Homeland Security Department, the White House and other federal agencies, according to U.S. officials briefed on the investigation.

[…]

The prominent victims have included CIA Director John Brennan, whose personal AOL account was breached, the then FBI Deputy Director Mark Giuliano, and James Clapper, the director of National Intelligence.

This week, the latest target became apparent when personal details of 20,000 FBI employees surfaced online.

By then a team of some of the FBI’s sharpest cyber experts had homed in on their suspect, officials said. They were shocked to find that a “16-year-old computer nerd” had done so well to cover his tracks, a U.S. official said.

I’m not really surprised, but this underscores how diffuse the threat is.

Posted on February 18, 2016 at 6:02 AM • 36 Comments

Judge Demands that Apple Backdoor an iPhone

A judge has ordered that Apple bypass iPhone security in order for the FBI to attempt a brute-force password attack on an iPhone 5c used by one of the San Bernardino killers. Apple is refusing.

The order is pretty specific technically. This implies to me that what the FBI is asking for is technically possible, and even that Apple assisted in the wording so that the case could be about the legal issues and not the technical ones.

From Apple’s statement about its refusal:

Some would argue that building a backdoor for just one iPhone is a simple, clean-cut solution. But it ignores both the basics of digital security and the significance of what the government is demanding in this case.

In today’s digital world, the “key” to an encrypted system is a piece of information that unlocks the data, and it is only as secure as the protections around it. Once the information is known, or a way to bypass the code is revealed, the encryption can be defeated by anyone with that knowledge.

The government suggests this tool could only be used once, on one phone. But that’s simply not true. Once created, the technique could be used over and over again, on any number of devices. In the physical world, it would be the equivalent of a master key, capable of opening hundreds of millions of locks—from restaurants and banks to stores and homes. No reasonable person would find that acceptable.

The government is asking Apple to hack our own users and undermine decades of security advancements that protect our customers—including tens of millions of American citizens—from sophisticated hackers and cybercriminals. The same engineers who built strong encryption into the iPhone to protect our users would, ironically, be ordered to weaken those protections and make our users less safe.

We can find no precedent for an American company being forced to expose its customers to a greater risk of attack. For years, cryptologists and national security experts have been warning against weakening encryption. Doing so would hurt only the well-meaning and law-abiding citizens who rely on companies like Apple to protect their data. Criminals and bad actors will still encrypt, using tools that are readily available to them.

Congressman Ted Lieu comments.

Here’s an interesting essay about why Tim Cook and Apple are such champions for encryption and privacy.

Today I walked by a television showing CNN. The sound was off, but I saw an aerial scene which I presume was from San Bernardino, and the words “Apple privacy vs. national security.” If that’s the framing, we lose. I would have preferred to see “National security vs. FBI access.”

Slashdot thread.

EDITED TO ADD (2/18): Good analysis of Apple’s case. Interesting debate. Nicholas Weaver’s comments. And commentary from some other planet.

EDITED TO ADD (2/19): Ben Adida comments:

What’s probably happening is that the FBI is using this as a test case for the general principle that they should be able to compel tech companies to assist in police investigations. And that’s pretty smart, because it’s a pretty good test case: Apple obviously wants to help prevent terrorist attacks, so they’re left to argue the slippery slope argument in the face of an FBI investigation of a known terrorist. Well done, FBI, well done.

And Julian Sanchez’s comments. His conclusion:

These, then, are the high stakes of Apple’s resistance to the FBI’s order: not whether the federal government can read one dead terrorism suspect’s phone, but whether technology companies can be conscripted to undermine global trust in our computing devices. That’s a staggeringly high price to pay for any investigation.

A New York Times editorial.

Also, two questions: One, what do we know about Apple’s assistance in the past, and why is this one different? Two, has anyone speculated on how much this will cost Apple? The FBI is demanding that Apple give them free engineering work. What’s the value of that work?

EDITED TO ADD (2/20): Jonathan Zdziarski writes on the differences between the FBI compelling someone to provide a service versus build a tool, and why the latter will 1) be difficult and expensive, 2) get out into the wild, and 3) set a dangerous precedent.

This answers my first question, above:

For years, the government could come to Apple with a subpoena and a phone, and have the manufacturer provide a disk image of the device. This largely worked because Apple didn’t have to hack into their phones to do this. Up until iOS 8, the encryption Apple chose to use in their design was easily reversible when you had code execution on the phone (which Apple does). So all through iOS 7, Apple only needed to insert the key into the safe and provide FBI with a copy of the data.

EFF wrote a good technical explainer on the case. My only complaint is with the last section. I have heard directly from Apple that this technique still works on current model phones using the current iOS version.

I am still stunned by how good a case the FBI chose for pushing this issue. They have all the sympathy in the media that they could hope for.

EDITED TO ADD (2/20): Tim Cook as privacy advocate. How the back door works on modern iPhones. Why the average American should care. The grugq on what this all means.

EDITED TO ADD (2/22): I wrote an op-ed for the Washington Post.

Posted on February 17, 2016 at 2:15 PM • 222 Comments

Enabling Trust by Consensus

Trust is a complex social phenomenon, captured very poorly by the binary nature of Internet trust systems. This paper proposes a social consensus system of trust: “Do You Believe in Tinker Bell? The Social Externalities of Trust,” by Khaled Baqer and Ross Anderson.

From the abstract:

Inspired by Tinker Bell, we propose a new approach: a trust service whose power arises directly from the number of users who decide to rely on it. Its power is limited to the provision of a single service, and failures to deliver this service should fairly rapidly become evident. As a proof of concept, we present a privacy-preserving reputation system to enhance quality of service in Tor, or a similar proxy network, with built-in incentives for correct behaviour. Tokens enable a node to interact directly with other nodes and are regulated by a distributed authority. Reputation is directly proportional to the number of tokens a node accumulates. By using blind signatures, we prevent the authority learning which entity has which tokens, so it cannot compromise privacy. Tokens lose value exponentially over time; this negative interest rate discourages hoarding. We demotivate costly system operations using taxes. We propose this reputation system not just as a concrete mechanism for systems requiring robust and privacy-preserving reputation metrics, but also as a thought experiment in how to fix the security economics of emergent trust.
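
As a thumbnail sketch of the token economics described in the abstract (the decay rate and tax here are invented, and the blind-signature machinery is omitted entirely):

    import math

    DECAY_RATE = 0.1  # per epoch; a hypothetical value

    def token_value(initial: float, epochs_elapsed: float) -> float:
        """Tokens depreciate as v0 * exp(-lambda * t): a negative
        interest rate, so hoarding them is pointless."""
        return initial * math.exp(-DECAY_RATE * epochs_elapsed)

    def charge_tax(balance: float, operation_cost: float) -> float:
        """Costly system operations are demotivated by a token tax."""
        if balance < operation_cost:
            raise ValueError("insufficient reputation tokens")
        return balance - operation_cost

    # A node that sat on 100 tokens for 20 epochs has lost most of them:
    print(round(token_value(100.0, 20), 2))  # ~13.53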

Blog post on the paper.

Posted on February 17, 2016 at 5:18 AM • 15 Comments

Fear and Anxiety

More psychological research on our reaction to terrorism and mass violence:

The researchers collected posts on Twitter made in response to the 2012 shooting attack at Sandy Hook Elementary School in Newtown, Connecticut. They looked at tweets about the school shooting over a five-and-a-half-month period to see whether people used different language in connection with the event depending on how geographically close they were to Newtown, or how much time had elapsed since the tragedy. The analysis showed that the further away people were from the tragedy in either space or time, the less they used words related to sadness (loss, grieve, mourn), suggesting that feelings of sorrow waned with growing psychological distance. But words related to anxiety (crazy, fearful, scared) showed the opposite pattern, increasing in frequency as people gained distance in either time or space from the tragic events. For example, within the first week of the shootings, words expressing sadness accounted for 1.69 percent of all words used in tweets about the event; about five months later, these had dwindled to 0.62 percent. In contrast, anxiety-related words went up from 0.27 percent to 0.62 percent over the same time.

Why does psychological distance mute sadness but incubate anxiety? The authors point out that as people feel more remote from an event, they shift from thinking of it in very concrete terms to more abstract ones, a pattern that has been shown in a number of previous studies. Concrete thoughts highlight the individual lives affected and the horrific details of the tragedy. (Images have particular power to make us feel the loss of individuals in a mass tragedy.) But when people think about the event abstractly, they’re more apt to focus on its underlying causes, which is anxiety inducing if the cause is seen as arising from an unresolved issue.

This is related.

Posted on February 16, 2016 at 6:27 AM • 10 Comments

Fitbit Data Reveals Pregnancy

A man learned his wife was pregnant from her Fitbit data.

The details of the story are weird. The man posted the data to Reddit and asked for analysis help. But the point is that the data can reveal pregnancy, and that this might not be something a person wants to share with a company that can sell the information for profit.

And remember, retailers want to know if one of their customers is pregnant.

Posted on February 12, 2016 at 12:16 PM • 13 Comments

Determining Physical Location on the Internet

Interesting research: “CPV: Delay-based Location Verification for the Internet”:

Abstract: The number of location-aware services over the Internet continues growing. Some of these require the client’s geographic location for security-sensitive applications. Examples include location-aware authentication, location-aware access policies, fraud prevention, complying with media licensing, and regulating online gambling/voting. An adversary can evade existing geolocation techniques, e.g., by faking GPS coordinates or employing a non-local IP address through proxy and virtual private networks. We devise Client Presence Verification (CPV), a delay-based verification technique designed to verify an assertion about a device’s presence inside a prescribed geographic region. CPV does not identify devices by their IP addresses. Rather, the device’s location is corroborated in a novel way by leveraging geometric properties of triangles, which prevents an adversary from manipulating measured delays. To achieve high accuracy, CPV mitigates Internet path asymmetry using a novel method to deduce one-way application-layer delays to/from the client’s participating device, and mines these delays for evidence supporting/refuting the asserted location. We evaluate CPV through detailed experiments on PlanetLab, exploring various factors that affect its efficacy, including the granularity of the verified location, and the verification time. Results highlight the potential of CPV for practical adoption.
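
Here’s a much-simplified sketch of the geometric idea, not the actual CPV protocol: if three verifiers enclosing the asserted region estimate their distances to the client from measured delays (the delay-to-distance scaling below is an invented approximation), the classic containment test applies, since a point lies inside a triangle exactly when the three sub-triangle areas sum to the area of the whole.

    import math

    KM_PER_MS = 100.0  # assumed effective propagation speed; illustrative

    def heron(a: float, b: float, c: float) -> float:
        """Triangle area from side lengths (0.0 if sides are infeasible)."""
        s = (a + b + c) / 2.0
        val = s * (s - a) * (s - b) * (s - c)
        return math.sqrt(val) if val > 0 else 0.0

    def client_inside_triangle(d_ab, d_bc, d_ca, delay_a, delay_b, delay_c,
                               tolerance=0.05):
        """d_*: known distances (km) between verifiers A, B, C.
        delay_*: one-way delays (ms) from each verifier to the client."""
        pa, pb, pc = (KM_PER_MS * t for t in (delay_a, delay_b, delay_c))
        whole = heron(d_ab, d_bc, d_ca)
        parts = heron(d_ab, pa, pb) + heron(d_bc, pb, pc) + heron(d_ca, pc, pa)
        return whole > 0 and abs(parts - whole) <= tolerance * whole

    # Equilateral verifier triangle, 1000 km on a side; a client at its
    # center is ~577 km (5.77 ms at the assumed speed) from each verifier.
    print(client_inside_triangle(1000, 1000, 1000, 5.77, 5.77, 5.77))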

News articles.

Posted on February 12, 2016 at 6:19 AM • 17 Comments

Worldwide Encryption Products Survey

Today I released my worldwide survey of encryption products.

The findings of this survey identified 619 entities that sell encryption products. Of those, 412, or two-thirds, are outside the US, calling into question the efficacy of any US mandates forcing backdoors for law-enforcement access. It also showed that anyone who wants to avoid US surveillance has over 567 competing products to choose from. These foreign products offer a wide variety of secure applications—voice encryption, text message encryption, file encryption, network-traffic encryption, anonymous currency—providing the same levels of security as US products do today.

Details:

  • There are at least 865 hardware or software products incorporating encryption from 55 different countries. This includes 546 encryption products from outside the US, representing two-thirds of the total.
  • The most common non-US country for encryption products is Germany, with 112 products. This is followed by the United Kingdom, Canada, France, and Sweden, in that order.
  • The five most common countries for encryption products—including the US—account for two-thirds of the total. But smaller countries like Algeria, Argentina, Belize, the British Virgin Islands, Chile, Cyprus, Estonia, Iraq, Malaysia, St. Kitts and Nevis, Tanzania, and Thailand each produce at least one encryption product.
  • Of the 546 foreign encryption products we found, 56% are available for sale and 44% are free. 66% are proprietary, and 34% are open source. Some for-sale products also have a free version.
  • At least 587 entities—primarily companies—either sell or give away encryption products. Of those, 374, or about two-thirds, are outside the US.
  • Of the 546 foreign encryption products, 47 are file encryption products, 68 e-mail encryption products, 104 message encryption products, 35 voice encryption products, and 61 virtual private networking products.

The report is here, here, and here. The data, in Excel form, is here.

Press articles are starting to come in. (Here are the previous blog posts on the effort.)

I know the database is incomplete, and I know there are errors. I welcome both additions and corrections, and will be releasing a 1.1 version of this survey in a few weeks.

EDITED TO ADD (2/13): More news.

Posted on February 11, 2016 at 11:05 AM • 62 Comments

AT&T Does Not Care about Your Privacy

AT&T’s CEO believes that the company should not offer robust security to its customers:

But tech company leaders aren’t all joining the fight against the deliberate weakening of encryption. AT&T CEO Randall Stephenson said this week that AT&T, Apple, and other tech companies shouldn’t have any say in the debate.

“I don’t think it is Silicon Valley’s decision to make about whether encryption is the right thing to do,” Stephenson said in an interview with The Wall Street Journal. “I understand [Apple CEO] Tim Cook’s decision, but I don’t think it’s his decision to make.”

His position is extreme in its disregard for the privacy of his customers. If he doesn’t believe that companies should have any say in what levels of privacy they offer their customers, you can be sure that AT&T won’t offer any robust privacy or security to you.

Does he have any clue what an anti-market position this is? He says that it is not the business of Silicon Valley companies to offer product features that might annoy the government. The “debate” about what features commercial products should have should happen elsewhere—presumably within the government. I thought we all agreed that state-controlled economies just don’t work.

My guess is that he doesn’t realize what an extreme position he’s taking by saying that product design isn’t the decision of companies to make. My guess is that AT&T is so deep in bed with the NSA and FBI that he’s just saying things he believes justify his position.

Here’s the original, behind a paywall.

Posted on February 10, 2016 at 1:59 PM • 51 Comments

The 2016 National Threat Assessment

It’s National Threat Assessment Day. Published annually by the Director of National Intelligence, the “Worldwide Threat Assessment of the US Intelligence Community” is the US intelligence community’s one chance each year to publicly talk about the threats in general. The document is the result of weeks of work and input from lots of people. For Clapper, it’s his chance to shape the dialog, set priorities, and prepare Congress for budget requests. The document is an unclassified summary of a much longer classified document. And the day also includes Clapper testifying before the Senate Armed Services Committee. (You’ll remember his now-famous lie to the committee in 2013.)

The document covers a wide variety of threats, from terrorism to organized crime, from energy politics to climate change. Although the document clearly says “The order of the topics presented in this statement does not necessarily indicate the relative importance or magnitude of the threat in the view of the Intelligence Community,” it does. And as in 2015 and 2014, cyber threats are #1—although this year the category is called “Cyber and Technology.”

The consequences of innovation and increased reliance on information technology in the next few years on both our society’s way of life in general and how we in the Intelligence Community specifically perform our mission will probably be far greater in scope and impact than ever. Devices, designed and fielded with minimal security requirements and testing, and an ever-increasing complexity of networks could lead to widespread vulnerabilities in civilian infrastructures and US Government systems. These developments will pose challenges to our cyber defenses and operational tradecraft but also create new opportunities for our own intelligence collectors.

Especially note that last clause. The FBI might hate encryption, but the intelligence community is not going dark.

The document then calls out a few specifics like the Internet of Things and Artificial Intelligence—no surprise, considering other recent statements from government officials. This is the “…and Technology” part of the category.

More specifically:

Future cyber operations will almost certainly include an increased emphasis on changing or manipulating data to compromise its integrity (i.e., accuracy and reliability) to affect decisionmaking, reduce trust in systems, or cause adverse physical effects. Broader adoption of IoT devices and AI—in settings such as public utilities and health care—will only exacerbate these potential effects. Russian cyber actors, who post disinformation on commercial websites, might seek to alter online media as a means to influence public discourse and create confusion. Chinese military doctrine outlines the use of cyber deception operations to conceal intentions, modify stored data, transmit false data, manipulate the flow of information, or influence public sentiments—all to induce errors and miscalculation in decisionmaking.

Russia is the number one threat, followed by China, Iran, North Korea, and non-state actors:

Russia is assuming a more assertive cyber posture based on its willingness to target critical infrastructure systems and conduct espionage operations even when detected and under increased public scrutiny. Russian cyber operations are likely to target US interests to support several strategic objectives: intelligence gathering to support Russian decisionmaking in the Ukraine and Syrian crises, influence operations to support military and political objectives, and continuing preparation of the cyber environment for future contingencies.

Comments on China refer to the cybersecurity agreement from last September:

China continues to have success in cyber espionage against the US Government, our allies, and US companies. Beijing also selectively uses cyberattacks against targets it believes threaten Chinese domestic stability or regime legitimacy. We will monitor compliance with China’s September 2015 commitment to refrain from conducting or knowingly supporting cyber-enabled theft of intellectual property with the intent of providing competitive advantage to companies or commercial sectors. Private-sector security experts have identified limited ongoing cyber activity from China but have not verified state sponsorship or the use of exfiltrated data for commercial gain.

Also interesting are the comments on non-state actors, which discuss propaganda campaigns from ISIL, criminal ransomware, and hacker tools.

Posted on February 9, 2016 at 3:25 PM • 25 Comments

Large-Scale FBI Hacking

As part of a child pornography investigation, the FBI hacked into over 1,300 computers.

But after Playpen was seized, it wasn’t immediately closed down, unlike previous dark web sites that have been shuttered by law enforcement. Instead, the FBI ran Playpen from its own servers in Newington, Virginia, from February 20 to March 4, reads a complaint filed against a defendant in Utah. During this time, the FBI deployed what is known as a network investigative technique (NIT), the agency’s term for a hacking tool.

While Playpen was being run out of a server in Virginia, and the hacking tool was infecting targets, “approximately 1300 true internet protocol (IP) addresses were identified during this time,” according to the same complaint.

The FBI seems to have obtained a single warrant, but it’s hard to believe that a legal warrant could allow the police to hack 1,300 different computers. We do know that the FBI is very vague about the extent of its operations in warrant applications. And surely we need actual public debate about this sort of technique.

Also, “Playpen” is a super-creepy name for a child porn site. I feel icky just typing it.

Posted on February 9, 2016 at 6:25 AM • 61 Comments

Data and Goliath Published in Paperback

Today, Data and Goliath is being published in paperback.

Everyone tells me that the paperback version sells better than the hardcover, even though it’s a year later. I can’t really imagine that there are tens of thousands of people who wouldn’t spend $28 on a hardcover but are happy to spend $18 on the paperback, but we’ll see. (Amazon has the hardcover for $19, the paperback for $11.70, and the Kindle edition for $14.60, plus shipping, if any. I am still selling signed hardcovers for $28 including domestic shipping—more for international.)

I got a box of paperbacks from my publisher last week. They look good. Not as good as the hardcover, but good for a trade paperback.

Posted on February 8, 2016 at 2:11 PM • 17 Comments

Exploiting Google Maps for Fraud

The New York Times has a long article on fraudulent locksmiths. The scam is a basic one: quote a low price on the phone, then charge much more once you show up and do the work. But the method by which the scammers find victims is new. They exploit Google’s crowdsourced system for identifying businesses on its maps. The scammers convince Google that they have a local address, which Google displays to its users who are searching for local businesses.

But they involve chicanery with two platforms: Google My Business, essentially the company’s version of the Yellow Pages, and Map Maker, which is Google’s crowdsourced online map of the world. The latter allows people around the planet to log in to the system and input data about streets, companies and points of interest.

Both Google My Business and Map Maker are a bit like Wikipedia, insofar as they are largely built and maintained by millions of contributors. Keeping the system open, with verification, gives countless businesses an invaluable online presence. Google officials say that the system is so good that many local companies do not bother building their own websites. Anyone who has ever navigated using Google Maps knows the service is a technological wonder.

But the very quality that makes Google’s systems accessible to companies that want to be listed makes them vulnerable to pernicious meddling.

“This is what you get when you rely on crowdsourcing for all your ‘up to date’ and ‘relevant’ local business content,” Mr. Seely said. “You get people who contribute meaningful content, and you get people who abuse the system.”

The scam is growing:

Lead gens have their deepest roots in locksmithing, but the model has migrated to an array of services, including garage door repair, carpet cleaning, moving and home security. Basically, they surface in any business where consumers need someone in the vicinity to swing by and clean, fix, relocate or install something.

What’s interesting to me are the economic incentives involved:

Only Google, it seems, can fix Google. The company is trying, its representatives say, by, among other things, removing fake information quickly and providing a “Report a Problem” tool on the maps. After looking over the fake Locksmith Force building, a bunch of other lead-gen advertisers in Phoenix and that Mountain View operation with more than 800 websites, Google took action.

Not only has the fake Locksmith Force building vanished from Google Maps, but the company no longer turns up in a “locksmith Phoenix” search. At least not in the first 20 pages. Nearly all the other spammy locksmiths pointed out to Google have disappeared from results, too.

“We’re in a constant arms race with local business spammers who, unfortunately, use all sorts of tricks to try to game our system and who’ve been a thorn in the Internet’s side for over a decade,” a Google spokesman wrote in an email. “As spammers change their techniques, we’re continually working on new, better ways to keep them off Google Search and Maps. There’s work to do, and we want to keep doing better.”

There was no mention of a stronger verification system or a beefed-up spam team at Google. Without such systemic solutions, Google’s critics say, the change to local results will not rise even to the level of superficial.

And that’s Google’s best option, really. It’s not the one losing money from these scammers, so it’s not motivated to fix the problem. Unless the problem rises to the level of affecting user trust in the entire system, it’s just going to do superficial things.

This is exactly the sort of market failure that government regulation needs to fix.

Posted on February 8, 2016 at 6:52 AM • 33 Comments

NSA Reorganizing

The NSA is undergoing a major reorganization, combining its attack and defense sides into a single organization:

In place of the Signals Intelligence and Information Assurance directorates—the organizations that historically have spied on foreign targets and defended classified networks against spying, respectively—the NSA is creating a Directorate of Operations that combines the operational elements of each.

It’s going to be difficult, since their missions and culture are so different.

The Information Assurance Directorate (IAD) seeks to build relationships with private-sector companies and help find vulnerabilities in software—most of which officials say wind up being disclosed. It issues software guidance and tests the security of systems to help strengthen their defenses.

But the other side of the NSA house, which looks for vulnerabilities that can be exploited to hack a foreign network, is much more secretive.

“You have this kind of clash between the closed environment of the sigint mission and the need of the information-assurance team to be out there in the public and be seen as part of the solution,” said a second former official. “I think that’s going to be a hard trick to pull off.”

I think this will make it even harder to trust the NSA. In my book Data and Goliath, I recommended separating the attack and defense missions of the NSA even further, breaking up the agency. (I also wrote about that idea here.)

Also missing from the reorg is how US CyberCommand’s offensive and defensive capabilities relate to the NSA’s. That seems pretty important, too.

EDITED TO ADD (2/11): Some more commentary.

EDITED TO ADD (2/13): Another.

Posted on February 5, 2016 at 3:15 PM

Tracking Anonymous Web Users

This research shows how to track e-commerce users better across multiple sessions, even when they do not provide unique identifiers such as user IDs or cookies.

Abstract: Targeting individual consumers has become a hallmark of direct and digital marketing, particularly as it has become easier to identify customers as they interact repeatedly with a company. However, across a wide variety of contexts and tracking technologies, companies find that customers cannot be consistently identified, which leads to a substantial fraction of anonymous visits in any CRM database. We develop a Bayesian imputation approach that allows us to probabilistically assign anonymous sessions to users, while accounting for a customer’s demographic information, frequency of interaction with the firm, and activities the customer engages in. Our approach simultaneously estimates a hierarchical model of customer behavior while probabilistically imputing which customers made the anonymous visits. We present both synthetic and real data studies that demonstrate our approach makes more accurate inference about individual customers’ preferences and responsiveness to marketing, relative to common approaches to anonymous visits: nearest-neighbor matching or ignoring the anonymous visits. We show how companies who use the proposed method will be better able to target individual customers, as well as infer how many of the anonymous visits are made by new customers.
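
The mechanics are easier to see in miniature. Here is a toy sketch in Python of the basic idea: score each known customer by a prior based on visit frequency times the likelihood of the observed session feature, then normalize into posterior assignment probabilities. The user names, the single Gaussian session feature, and the frequency prior are all invented for illustration; the paper’s actual hierarchical model is far richer.

    # Toy sketch (not the paper's model): assign an anonymous session to
    # known customers by prior (visit frequency) times likelihood of the
    # observed session feature, normalized into posterior probabilities.
    import math

    # Invented example users: a 1-D behavior model (mean/std of session
    # length in minutes) plus a visit count used for the prior.
    users = {
        "user_a": {"mean": 5.0, "std": 1.5, "visits": 40},
        "user_b": {"mean": 12.0, "std": 3.0, "visits": 10},
    }

    def gaussian_pdf(x, mean, std):
        z = (x - mean) / std
        return math.exp(-0.5 * z * z) / (std * math.sqrt(2 * math.pi))

    def assignment_probabilities(session_length):
        total = sum(u["visits"] for u in users.values())
        scores = {
            name: (u["visits"] / total)
                  * gaussian_pdf(session_length, u["mean"], u["std"])
            for name, u in users.items()
        }
        norm = sum(scores.values())
        return {name: s / norm for name, s in scores.items()}

    # A 6-minute anonymous session: overwhelmingly likely to be user_a.
    print(assignment_probabilities(6.0))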

Posted on February 5, 2016 at 6:56 AM

The Internet of Things Will Be the World's Biggest Robot

The Internet of Things is the name given to the computerization of everything in our lives. Already you can buy Internet-enabled thermostats, light bulbs, refrigerators, and cars. Soon everything will be on the Internet: the things we own, the things we interact with in public, autonomous things that interact with each other.

These “things” will have two separate parts. One part will be sensors that collect data about us and our environment. Already our smartphones know our location and, with their onboard accelerometers, track our movements. Things like our thermostats and light bulbs will know who is in the room. Internet-enabled street and highway sensors will know how many people are out and about—and eventually who they are. Sensors will collect environmental data from all over the world.

The other part will be actuators. They’ll affect our environment. Our smart thermostats aren’t collecting information about ambient temperature and who’s in the room for nothing; they set the temperature accordingly. Phones already know our location, and send that information back to Google Maps and Waze to determine where traffic congestion is; when they’re linked to driverless cars, they’ll automatically route us around that congestion. Amazon already wants autonomous drones to deliver packages. The Internet of Things will increasingly perform actions for us and in our name.

Increasingly, human intervention will be unnecessary. The sensors will collect data. The system’s smarts will interpret the data and figure out what to do. And the actuators will do things in our world. You can think of the sensors as the eyes and ears of the Internet, the actuators as the hands and feet of the Internet, and the stuff in the middle as the brain. This makes the future clearer. The Internet now senses, thinks, and acts.
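
That sense-think-act cycle is the classic robotics control loop, just blown up to planetary scale. Here is a deliberately tiny Python sketch of the loop, with a hypothetical thermostat standing in for the whole system; the sensor, decision rule, and actuator are all placeholders.

    # Toy sense-think-act loop: a hypothetical thermostat standing in for
    # the world-sized version. Sensor, brain, and actuator are placeholders.
    import random
    import time

    def read_temperature():
        # "Eyes and ears": a stand-in sensor reading, 15-25 degrees C.
        return 15 + random.random() * 10

    def decide(temp, target=20.0):
        # "Brain": interpret the data and choose an action.
        if temp < target - 1:
            return "heat_on"
        if temp > target + 1:
            return "heat_off"
        return "idle"

    def actuate(action):
        # "Hands and feet": act on the world (here, just print).
        print("actuator ->", action)

    for _ in range(3):  # the real loop never terminates
        actuate(decide(read_temperature()))
        time.sleep(0.1)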

We’re building a world-sized robot, and we don’t even realize it.

I’ve started calling this robot the World-Sized Web.

The World-Sized Web—can I call it WSW?—is more than just the Internet of Things. Much of the WSW’s brains will be in the cloud, on servers connected via cellular, Wi-Fi, or short-range data networks. It’s mobile, of course, because many of these things will move around with us, like our smartphones. And it’s persistent. You might be able to turn off small pieces of it here and there, but in the main the WSW will always be on, and always be there.

None of these technologies are new, but they’re all becoming more prevalent. I believe that we’re at the brink of a phase change around information and networks. The difference in degree will become a difference in kind. That’s the robot that is the WSW.

This robot will be increasingly autonomous, at first in simple ways and eventually by using the capabilities of artificial intelligence. Drones with sensors will fly to places where the WSW needs to collect data. Vehicles with actuators will drive to places the WSW needs to affect. Other parts of the robot will “decide” where to go, what data to collect, and what to do.

We’re already seeing this kind of thing in warfare; drones are surveilling the battlefield and firing weapons at targets. Humans are still in the loop, but how long will that last? And when both the data collection and resultant actions are more benign than a missile strike, autonomy will be an easier sell.

By and large, the WSW will be a benign robot. It will collect data and do things in our interests; that’s why we’re building it. But it will change our society in ways we can’t predict, some of them good and some of them bad. It will maximize profits for the people who control the components. It will enable totalitarian governments. It will empower criminals and hackers in new and different ways. It will cause power balances to shift and societies to change.

These changes are inherently unpredictable, because they’re based on the emergent properties of these new technologies interacting with each other, us, and the world. In general, it’s easy to predict technological changes due to scientific advances, but much harder to predict social changes due to those technological changes. For example, it was easy to predict that better engines would mean that cars could go faster. It was much harder to predict that the result would be a demographic shift into suburbs. Driverless cars and smart roads will again transform our cities in new ways, as will autonomous drones, cheap and ubiquitous environmental sensors, and a network that can anticipate our needs.

Maybe the WSW is more like an organism. It won’t have a single mind. Parts of it will be controlled by large corporations and governments. Small parts of it will be controlled by us. But writ large its behavior will be unpredictable, the result of millions of tiny goals and billions of interactions between parts of itself.

We need to start thinking seriously about our new world-spanning robot. The market will not sort this out all by itself. By nature, it is short-term and profit-motivated—and these issues require broader thinking. University of Washington law professor Ryan Calo has proposed a Federal Robotics Commission as a place where robotics expertise and advice can be centralized within the government. Japan and Korea are already moving in this direction.

Speaking as someone with a healthy skepticism of yet another government agency, I think we need to go further. We need to create a new agency, a Department of Technology Policy, that can deal with the WSW in all its complexities. It needs the power to aggregate expertise, to advise other agencies, and probably the authority to regulate when appropriate. We can argue the details, but no existing government entity has either the expertise or the authority to tackle something this broad and far-reaching. And the question is not whether government will start regulating these technologies, it’s how smart it will be when it does.

The WSW is being built right now, without anyone noticing, and it’ll be here before we know it. Whatever changes it means for society, we don’t want it to take us by surprise.

This essay originally appeared on Forbes.com, which annoyingly blocks browsers using ad blockers.

EDITED TO ADD: Kevin Kelly has also thought along these lines, calling the robot “Holos.”

EDITED TO ADD: Commentary.

EDITED TO ADD: This essay has been translated into Hebrew.

Posted on February 4, 2016 at 6:18 AM

Security vs. Surveillance

Both the “going dark” metaphor of FBI Director James Comey and the contrasting “golden age of surveillance” metaphor of privacy law professor Peter Swire focus on the value of data to law enforcement. As framed in the media, encryption debates are about whether law enforcement should have surreptitious access to data, or whether companies should be allowed to provide strong encryption to their customers.

It’s a myopic framing that focuses only on one threat—criminals, including domestic terrorists—and the demands of law enforcement and national intelligence. This obscures the most important aspects of the encryption issue: the security it provides against a much wider variety of threats.

Encryption secures our data and communications against eavesdroppers like criminals, foreign governments, and terrorists. We use it every day to hide our cell phone conversations from eavesdroppers, and to hide our Internet purchasing from credit card thieves. Dissidents in China and many other countries use it to avoid arrest. It’s a vital tool for journalists to communicate with their sources, for NGOs to protect their work in repressive countries, and for attorneys to communicate with their clients.

Many technological security failures of today can be traced to failures of encryption. In 2014 and 2015, unnamed hackers—probably the Chinese government—stole 21.5 million personal files of U.S. government employees and others. They wouldn’t have obtained this data if it had been encrypted. Many large-scale criminal data thefts were made either easier or more damaging because data wasn’t encrypted: Target, TJ Maxx, Heartland Payment Systems, and so on. Many countries are eavesdropping on the unencrypted communications of their own citizens, looking for dissidents and other voices they want to silence.
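
To make the mechanism concrete: when records are encrypted at rest, a thief who copies the disk gets only ciphertext, and the key is the whole game. A minimal illustration using Python’s third-party cryptography package (an arbitrary choice for the example, not a recommendation); the record contents are invented:

    # Sketch: records encrypted at rest are useless to a thief without the
    # key. Assumes the third-party "cryptography" package is installed.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()   # stored separately from the data
    f = Fernet(key)

    record = b"name=Alice;card=4111-xxxx"   # invented example record
    stored = f.encrypt(record)              # what actually sits on disk

    print(stored)             # all an attacker gets by copying the disk
    print(f.decrypt(stored))  # only the key holder recovers the record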

Adding backdoors will only exacerbate the risks. As technologists, we can’t build an access system that only works for people of a certain citizenship, or with a particular morality, or only in the presence of a specified legal document. If the FBI can eavesdrop on your text messages or get at your computer’s hard drive, so can other governments. So can criminals. So can terrorists. This is not theoretical; again and again, backdoor accesses built for one purpose have been surreptitiously used for another. Vodafone built backdoor access into Greece’s cell phone network for the Greek government; it was used against the Greek government in 2004-2005. Google kept a database of backdoor accesses provided to the U.S. government under CALEA; the Chinese breached that database in 2009.

We’re not being asked to choose between security and privacy. We’re being asked to choose between less security and more security.

This trade-off isn’t new. In the mid-1990s, cryptographers argued that escrowing encryption keys with central authorities would weaken security. In 2011, cybersecurity researcher Susan Landau published her excellent book Surveillance or Security?, which deftly parsed the details of this trade-off and concluded that security is far more important.

Ubiquitous encryption protects us much more from bulk surveillance than from targeted surveillance. For a variety of technical reasons, computer security is extraordinarily weak. If a sufficiently skilled, funded, and motivated attacker wants in to your computer, they’re in. If they’re not, it’s because you’re not high enough on their priority list to bother with. Widespread encryption forces the listener—whether a foreign government, criminal, or terrorist—to target. And this hurts repressive governments much more than it hurts terrorists and criminals.

Of course, criminals and terrorists have used, are using, and will use encryption to hide their planning from the authorities, just as they will use many aspects of society’s capabilities and infrastructure: cars, restaurants, telecommunications. In general, we recognize that such things can be used by both honest and dishonest people. Society thrives nonetheless because the honest so outnumber the dishonest. Compare this with the tactic of secretly poisoning all the food at a restaurant. Yes, we might get lucky and poison a terrorist before he strikes, but we’ll harm all the innocent customers in the process. Weakening encryption for everyone is harmful in exactly the same way.

This essay previously appeared as part of the paper “Don’t Panic: Making Progress on the ‘Going Dark’ Debate.” It was reprinted on Lawfare. A modified version was reprinted by the MIT Technology Review.

Posted on February 3, 2016 at 6:09 AM

More Details on the NSA Switching to Quantum-Resistant Cryptography

The NSA is publicly moving away from cryptographic algorithms vulnerable to cryptanalysis using a quantum computer. It just published a FAQ about the process:

Q: Is there a quantum resistant public-key algorithm that commercial vendors should adopt?

A: While a number of interesting quantum resistant public key algorithms have been proposed external to NSA, nothing has been standardized by NIST, and NSA is not specifying any commercial quantum resistant standards at this time. NSA expects that NIST will play a leading role in the effort to develop a widely accepted, standardized set of quantum resistant algorithms. Once these algorithms have been standardized, NSA will require vendors selling to NSS operators to provide FIPS validated implementations in their products. Given the level of interest in the cryptographic community, we hope that there will be quantum resistant algorithms widely available in the next decade. NSA does not recommend implementing or using non-standard algorithms, and the field of quantum resistant cryptography is no exception.

[…]

Q: When will quantum resistant cryptography be available?

A: For systems that will use unclassified cryptographic algorithms it is vital that NSA use cryptography that is widely accepted and widely available as part of standard commercial offerings vetted through NIST’s cryptographic standards development process. NSA will continue to support NIST in the standardization process and will also encourage work in the vendor and larger standards communities to help produce standards with broad support for deployment in NSS. NSA believes that NIST can lead a robust and transparent process for the standardization of publicly developed and vetted algorithms, and we encourage this process to begin soon. NSA believes that the external cryptographic community can develop quantum resistant algorithms and reach broad agreement for standardization within a few years.

Lots of other interesting stuff in the Q&A.

Posted on February 2, 2016 at 7:11 AM

NSA's TAO Head on Internet Offense and Defense

Rob Joyce, the head of the NSA’s Tailored Access Operations (TAO) group—basically the country’s chief hacker—spoke in public earlier this week. He talked both about how the NSA hacks into networks, and what network defenders can do to protect themselves. Here’s a video of the talk, and here are two good summaries.

Intrusion Phases

  • Reconnaissance
  • Initial Exploitation
  • Establish Persistence
  • Install Tools
  • Move Laterally
  • Collect, Exfil, and Exploit

The event was the USENIX Enigma Conference.

The talk is full of good information about how APT attacks work and how networks can defend themselves. Nothing really surprising, but all interesting. Which brings up the most important question: why did the NSA decide to put Joyce on stage in public? It surely doesn’t want all of its target networks to improve their security so much that the NSA can no longer get in. On the other hand, the NSA does want the general security of US—and presumably allied—networks to improve. My guess is that this is simply a NOBUS (“nobody but us”) issue. The NSA is, or at least believes it is, so sophisticated in its attack techniques that these defensive recommendations won’t slow it down significantly. And Chinese, Russian, and other state-sponsored attackers will have a harder time. Or, at least, that’s what the NSA wants us to believe.

Wheels within wheels….

More information about the NSA’s TAO group is here and here. Here’s an article about TAO’s catalog of implants and attack tools. Note that the catalog is from 2007. Presumably TAO has been very busy developing new attack tools over the past ten years.

BoingBoing post.

EDITED TO ADD (2/2): I was talking with Nicholas Weaver, and he said that he found these three points interesting:

  • A one-way monitoring system really gives them headaches, because it allows the defender to go back after the fact and see what happened, remove malware, etc. (see the sketch after this list).
  • The critical component of APT is the P: persistence. They will just keep trying, trying, and trying. If you have a temporary vulnerability—the window between a vulnerability and a patch, temporarily turning off a defense—they’ll exploit it.
  • Trust them when they attribute an attack (e.g., Sony) on the record. Attribution is hard, but when they can attribute, they know for sure—and they don’t attribute lightly.
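
On the first point, a one-way tap is trivial to build and gives the attacker nothing to interact with: traffic is recorded for later analysis, and nothing is ever transmitted. A minimal sketch using the Python scapy library; the interface name and packet count are placeholder assumptions:

    # Sketch of passive, record-only monitoring: capture traffic to disk
    # for after-the-fact analysis. Needs scapy and root privileges; "eth0"
    # and the packet count are placeholder assumptions.
    from scapy.all import sniff, wrpcap

    packets = sniff(iface="eth0", count=1000)  # listen only, never transmit
    wrpcap("capture.pcap", packets)            # replayable forensic record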

Posted on February 1, 2016 at 6:42 AM
