Crypto-Gram

February 15, 2016

by Bruce Schneier
CTO, Resilient Systems, Inc.
schneier@schneier.com
https://www.schneier.com

A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.

For back issues, or to subscribe, visit <https://www.schneier.com/crypto-gram.html>.

You can read this issue on the web at <https://www.schneier.com/crypto-gram/archives/2016/…>. These same essays and news items appear in the “Schneier on Security” blog at <http://www.schneier.com/>, along with a lively and intelligent comment section. An RSS feed is available.


In this issue:
      The Internet of Things Will Be the World’s Biggest Robot
      Integrity and Availability Threats
      Security vs. Surveillance
      Paper on the Going Dark Debate
      News
      The 2016 National Threat Assessment
      AT&T Does Not Care about Your Privacy
      Schneier News
      “Data and Goliath” Published in Paperback
      NSA’s TAO Head on Internet Offense and Defense
      Worldwide Encryption Products Survey

The Internet of Things Will Be the World’s Biggest Robot

The Internet of Things is the name given to the computerization of everything in our lives. Already you can buy Internet-enabled thermostats, light bulbs, refrigerators, and cars. Soon everything will be on the Internet: the things we own, the things we interact with in public, autonomous things that interact with each other.

These “things” will have two separate parts. One part will be sensors that collect data about us and our environment. Already our smartphones know our location and, with their onboard accelerometers, track our movements. Things like our thermostats and light bulbs will know who is in the room. Internet-enabled street and highway sensors will know how many people are out and about—and eventually who they are. Sensors will collect environmental data from all over the world.

The other part will be actuators. They’ll affect our environment. Our smart thermostats aren’t collecting information about ambient temperature and who’s in the room for nothing; they set the temperature accordingly. Phones already know our location, and send that information back to Google Maps and Waze to determine where traffic congestion is; when they’re linked to driverless cars, they’ll automatically route us around that congestion. Amazon already wants autonomous drones to deliver packages. The Internet of Things will increasingly perform actions for us and in our name.

Increasingly, human intervention will be unnecessary. The sensors will collect data. The system’s smarts will interpret the data and figure out what to do. And the actuators will do things in our world. You can think of the sensors as the eyes and ears of the Internet, the actuators as the hands and feet of the Internet, and the stuff in the middle as the brain. This makes the future clearer. The Internet now senses, thinks, and acts.
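In code, that loop is almost trivially simple. Here is a minimal, hypothetical sketch of one sense-think-act cycle for a networked thermostat; every function name below is invented for illustration, and a real device would report to cloud services rather than print to a console.

    # A toy sense-think-act loop for one hypothetical smart thermostat.
    # Every name below is made up for illustration; none of this is a
    # real product's API.

    import random
    import time

    def read_temperature():
        # Sensor: the current room temperature in Celsius (simulated).
        return 18.0 + random.random() * 6.0

    def room_is_occupied():
        # Sensor: whether anyone is in the room (simulated).
        return random.random() < 0.5

    def set_heating(on):
        # Actuator: switch the heat on or off (here, just report it).
        print("heating", "ON" if on else "OFF")

    def decide(temperature, occupied, target=21.0):
        # The "brain": heat only when someone is home and it is cold.
        return occupied and temperature < target

    for _ in range(5):                       # a real device never stops
        temp = read_temperature()            # eyes and ears
        occupied = room_is_occupied()
        set_heating(decide(temp, occupied))  # hands and feet
        time.sleep(1)

Scale that loop up to billions of devices, and wire the “decide” step into cloud services that see everyone’s data at once, and the metaphor stops being a metaphor.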

We’re building a world-sized robot, and we don’t even realize it.

I’ve started calling this robot the World-Sized Web.

The World-Sized Web—can I call it WSW?—is more than just the Internet of Things. Much of the WSW’s brains will be in the cloud, on servers connected via cellular, Wi-Fi, or short-range data networks. It’s mobile, of course, because many of these things will move around with us, like our smartphones. And it’s persistent. You might be able to turn off small pieces of it here and there, but in the main the WSW will always be on, and always be there.

None of these technologies are new, but they’re all becoming more prevalent. I believe that we’re at the brink of a phase change around information and networks. The difference in degree will become a difference in kind. That’s the robot that is the WSW.

This robot will increasingly be autonomous, at first in simple ways, and increasingly using the capabilities of artificial intelligence. Drones with sensors will fly to places where the WSW needs to collect data. Vehicles with actuators will drive to places that the WSW needs to affect. Other parts of the robot will “decide” where to go, what data to collect, and what to do.

We’re already seeing this kind of thing in warfare; drones are surveilling the battlefield and firing weapons at targets. Humans are still in the loop, but how long will that last? And when both the data collection and resultant actions are more benign than a missile strike, autonomy will be an easier sell.

By and large, the WSW will be a benign robot. It will collect data and do things in our interests; that’s why we’re building it. But it will change our society in ways we can’t predict, some of them good and some of them bad. It will maximize profits for the people who control the components. It will enable totalitarian governments. It will empower criminals and hackers in new and different ways. It will cause power balances to shift and societies to change.

These changes are inherently unpredictable, because they’re based on the emergent properties of these new technologies interacting with each other, us, and the world. In general, it’s easy to predict technological changes due to scientific advances, but much harder to predict social changes due to those technological changes. For example, it was easy to predict that better engines would mean that cars could go faster. It was much harder to predict that the result would be a demographic shift into suburbs. Driverless cars and smart roads will again transform our cities in new ways, as will autonomous drones, cheap and ubiquitous environmental sensors, and a network that can anticipate our needs.

Maybe the WSW is more like an organism. It won’t have a single mind. Parts of it will be controlled by large corporations and governments. Small parts of it will be controlled by us. But writ large its behavior will be unpredictable, the result of millions of tiny goals and billions of interactions between parts of itself.

We need to start thinking seriously about our new world-spanning robot. The market will not sort this out all by itself. By nature, it is short-term and profit-motivated—and these issues require broader thinking. University of Washington law professor Ryan Calo has proposed a Federal Robotics Commission as a place where robotics expertise and advice can be centralized within the government. Japan and Korea are already moving in this direction.

Speaking as someone with a healthy skepticism for another government agency, I think we need to go further. We need to create a new agency, a Department of Technology Policy, that can deal with the WSW in all its complexities. It needs the power to aggregate expertise and advise other agencies, and probably the authority to regulate when appropriate. We can argue the details, but no existing government entity has either the expertise or the authority to tackle something this broad and far-reaching. And the question is not whether government will start regulating these technologies, it’s how smart it will be when it does.

The WSW is being built right now, without anyone noticing, and it’ll be here before we know it. Whatever changes it means for society, we don’t want it to take us by surprise.

This essay originally appeared on Forbes.com, which annoyingly blocks browsers using ad blockers.
http://www.forbes.com/sites/bruceschneier/2016/02/…

Ryan Calo on the Federal Robotics Commission:
http://papers.ssrn.com/sol3/papers.cfm?…

Japan and Korea:
http://japan.kantei.go.jp/97_abe/actions/201505/…
http://www.roboticsbusinessreview.com/article/…

Kevin Kelly has also thought along these lines, calling the robot “Holos.”
http://longnow.org/seminars/02014/nov/12/…

Commentary:
https://resilient.com/…


Integrity and Availability Threats

Cyberthreats are changing. We’re worried about hackers crashing airplanes by hacking into computer networks. We’re worried about hackers remotely disabling cars. We’re worried about manipulated counts from electronic voting booths, remote murder through hacked medical devices, and someone hacking an Internet thermostat to turn off the heat and freeze the pipes.

The traditional academic way of thinking about information security is as a triad: confidentiality, integrity, and availability. For years, the security industry has been trying to prevent data theft. Stolen data is used for identity theft and other frauds. It can be embarrassing, as in the Ashley Madison breach. It can be damaging, as in the Sony data theft. It can even be a national security threat, as in the case of the Office of Personnel Management data breach. These are all breaches of privacy and confidentiality.

As bad as these threats are, they seem abstract. It’s been hard to craft public policy around them. But this is all changing. Threats to integrity and availability are much more visceral and much more devastating. And they will spur legislative action in a way that privacy risks never have.

Take one example: driverless cars and smart roads.

We’re heading toward a world where driverless cars will automatically communicate with each other and the roads, automatically taking us where we need to go safely and efficiently. The confidentiality threats are real: Someone who can eavesdrop on those communications can learn where the cars are going and maybe who is inside them. But the integrity threats are much worse.

Someone who can feed the cars false information can potentially cause them to crash into each other or nearby walls. Someone could also disable your car so it can’t start. Or worse, disable the entire system so that no one’s car can start.

This new rise in integrity and availability threats is a result of the Internet of Things. The objects we own and interact with will all become computerized and connected to the Internet. But it’s actually more complicated than that.

What I’m calling the “World-Sized Web” is a combination of these Internet-enabled things, cloud computing, mobile computing, and the pervasiveness that comes from these systems being always on. Together, this means that computers and networks will be much more embedded in our daily lives. Yes, there will be more need for confidentiality, but there is a newfound need to ensure that these systems can’t be subverted to do real damage.

It’s one thing if your smart door lock can be eavesdropped on to learn who is home. It’s another thing entirely if it can be hacked to prevent you from opening your door or to allow a burglar to open it.
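To make that distinction concrete, here is a minimal sketch of the integrity half of the story. It is not how any real lock works, and every name in it is invented. Encryption alone would hide the commands from an eavesdropper (confidentiality); the message authentication code shown here is what lets the lock refuse commands that didn’t come from the legitimate key holder (integrity).

    # A toy illustration of integrity protection for a hypothetical
    # "smart lock" command channel. Not a real protocol; it only shows
    # why an authenticated command differs from a merely secret one.

    import hashlib
    import hmac
    import os

    SHARED_KEY = os.urandom(32)  # provisioned into lock and owner's phone

    def sign_command(command, key):
        # Owner's phone: append a MAC so the lock can tell who sent this.
        mac = hmac.new(key, command, hashlib.sha256).digest()
        return command + mac     # the MAC is the last 32 bytes

    def lock_accepts(message, key):
        # The lock: act only on commands whose MAC verifies.
        command, mac = message[:-32], message[-32:]
        expected = hmac.new(key, command, hashlib.sha256).digest()
        return hmac.compare_digest(mac, expected)

    genuine = sign_command(b"unlock", SHARED_KEY)
    forged = b"unlock" + os.urandom(32)       # attacker has no key

    print(lock_accepts(genuine, SHARED_KEY))  # True: legitimate command
    print(lock_accepts(forged, SHARED_KEY))   # False: forgery rejected

Availability is the third property, and no amount of cryptography in the lock helps if an attacker can simply knock the whole system offline.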

In separate testimonies before different House and Senate committees last year, both Director of National Intelligence James Clapper and NSA Director Mike Rogers warned of these threats. They both consider them far larger and more important than the confidentiality threat, and they believe that we are vulnerable to attack.

And once the attacks start doing real damage—once someone dies from a hacked car or medical device, or an entire city’s 911 services go down for a day—there will be a real outcry to do something.

Congress will be forced to act. They might authorize more surveillance. They might authorize more government involvement in private-sector cybersecurity. They might try to ban certain technologies or certain uses. The results won’t be well-thought-out, and they probably won’t mitigate the actual risks. If we’re lucky, they won’t cause even more problems.

I worry that we’re rushing headlong into the World-Sized Web, and not paying enough attention to the new threats that it brings with it. Again and again, we’ve tried to retrofit security in after the fact.

It would be nice if we could do it right from the beginning this time. That’s going to take foresight and planning. The Obama administration just proposed spending $4 billion to advance the engineering of driverless cars.

How about focusing some of that money on the integrity and availability threats from that and similar technologies?

This essay previously appeared on CNN.com.
http://edition.cnn.com/2016/01/26/opinions/…

Hacking airplanes:
http://www.wired.com/2015/05/…

Hacking medical devices:
http://www.informationweek.com/partner-perspectives/…

Hacking thermostats:
http://www.networkworld.com/article/2905053/…

James Clapper’s remarks:
http://www.scmagazine.com/…

Mike Rogers’s remarks:
http://thehill.com/policy/cybersecurity/…

These threats are larger and more important:
https://www.schneier.com/blog/archives/2015/03/…

$4 billion for driverless cars:
http://www.wsj.com/articles/…


Security vs. Surveillance

Both the “going dark” metaphor of FBI Director James Comey and the contrasting “golden age of surveillance” metaphor of privacy law professor Peter Swire focus on the value of data to law enforcement. As framed in the media, encryption debates are about whether law enforcement should have surreptitious access to data, or whether companies should be allowed to provide strong encryption to their customers.

It’s a myopic framing that focuses only on one threat—criminals, including domestic terrorists—and the demands of law enforcement and national intelligence. This obscures the most important aspects of the encryption issue: the security it provides against a much wider variety of threats.

Encryption secures our data and communications against eavesdroppers like criminals, foreign governments, and terrorists. We use it every day to hide our cell phone conversations from eavesdroppers, and to hide our Internet purchasing from credit card thieves. Dissidents in China and many other countries use it to avoid arrest. It’s a vital tool for journalists to communicate with their sources, for NGOs to protect their work in repressive countries, and for attorneys to communicate with their clients.

Many technological security failures of today can be traced to failures of encryption. In 2014 and 2015, unnamed hackers—probably the Chinese government—stole 21.5 million personal files of U.S. government employees and others. They wouldn’t have obtained this data if it had been encrypted. Many large-scale criminal data thefts were made either easier or more damaging because data wasn’t encrypted: Target, TJ Maxx, Heartland Payment Systems, and so on. Many countries are eavesdropping on the unencrypted communications of their own citizens, looking for dissidents and other voices they want to silence.

Adding backdoors will only exacerbate the risks. As technologists, we can’t build an access system that only works for people of a certain citizenship, or with a particular morality, or only in the presence of a specified legal document. If the FBI can eavesdrop on your text messages or get at your computer’s hard drive, so can other governments. So can criminals. So can terrorists. This is not theoretical; again and again, backdoor accesses built for one purpose have been surreptitiously used for another. Vodafone built backdoor access into Greece’s cell phone network for the Greek government; it was used against the Greek government in 2004-2005. Google kept a database of backdoor accesses provided to the U.S. government under CALEA; the Chinese breached that database in 2009.

We’re not being asked to choose between security and privacy. We’re being asked to choose between less security and more security.

This trade-off isn’t new. In the mid-1990s, cryptographers argued that escrowing encryption keys with central authorities would weaken security. In 2011, cybersecurity researcher Susan Landau published her excellent book “Surveillance or Security?”, which deftly parsed the details of this trade-off and concluded that security is far more important.

Ubiquitous encryption protects us much more from bulk surveillance than from targeted surveillance. For a variety of technical reasons, computer security is extraordinarily weak. If a sufficiently skilled, funded, and motivated attacker wants in to your computer, they’re in. If they’re not, it’s because you’re not high enough on their priority list to bother with. Widespread encryption forces the listener—whether a foreign government, criminal, or terrorist—to target. And this hurts repressive governments much more than it hurts terrorists and criminals.

Of course, criminals and terrorists have used, are using, and will use encryption to hide their planning from the authorities, just as they will use many aspects of society’s capabilities and infrastructure: cars, restaurants, telecommunications. In general, we recognize that such things can be used by both honest and dishonest people. Society thrives nonetheless because the honest so outnumber the dishonest. Compare this with the tactic of secretly poisoning all the food at a restaurant. Yes, we might get lucky and poison a terrorist before he strikes, but we’ll harm all the innocent customers in the process. Weakening encryption for everyone is harmful in exactly the same way.

This essay previously appeared as part of the paper “Don’t Panic: Making Progress on the ‘Going Dark’ Debate”—see below. It was reprinted on Lawfare. A modified version was reprinted by the “MIT Technology Review.”
https://www.lawfareblog.com/security-or-surveillance
http://www.technologyreview.com/news/545716/…

“Going dark” metaphor:
https://www.fbi.gov/news/speeches/…

“Golden age of surveillance” metaphor:
https://www.judiciary.senate.gov/imo/media/doc/…

Why we can’t build secure backdoors:
http://dspace.mit.edu/handle/1721.1/97690

Breaches of backdoors:
http://spectrum.ieee.org/telecom/security/…
https://www.washingtonpost.com/world/…

Mid-1990s anti-key-escrow argument:
https://www.schneier.com/paper-key-escrow.html


Paper on the Going Dark Debate

I am pleased to have been a part of this report, part of the Berkman Center’s Berklett Cybersecurity project:

Don’t Panic: Making Progress on the “Going Dark” Debate

From the report:

In this report, we question whether the “going dark” metaphor accurately describes the state of affairs. Are we really headed to a future in which our ability to effectively surveil criminals and bad actors is impossible? We think not. The question we explore is the significance of this lack of access to communications for legitimate government interests. We argue that communications in the future will neither be eclipsed into darkness nor illuminated without shadow.

In short our findings are:

* End-to-end encryption and other technological architectures for obscuring user data are unlikely to be adopted ubiquitously by companies, because the majority of businesses that provide communications services rely on access to user data for revenue streams and product functionality, including user data recovery should a password be forgotten.

* Software ecosystems tend to be fragmented. In order for encryption to become both widespread and comprehensive, far more coordination and standardization than currently exists would be required.

* Networked sensors and the Internet of Things are projected to grow substantially, and this has the potential to drastically change surveillance. The still images, video, and audio captured by these devices may enable real-time intercept and recording with after-the-fact access. Thus an inability to monitor an encrypted channel could be mitigated by the ability to monitor from afar a person through a different channel.

* Metadata is not encrypted, and the vast majority is likely to remain so. This is data that needs to stay unencrypted in order for the systems to operate: location data from cell phones and other devices, telephone calling records, header information in e-mail, and so on. This information provides an enormous amount of surveillance data that was unavailable before these systems became widespread.

* These trends raise novel questions about how we will protect individual privacy and security in the future. Today’s debate is important, but for all its efforts to take account of technological trends, it is largely taking place without reference to the full picture.

https://cyber.law.harvard.edu/pubrelease/dont-panic/

News coverage:
http://www.nytimes.com/2016/02/01/us/politics/…
https://theintercept.com/2016/02/01/…
http://mashable.com/2016/02/01/encryption-going-dark/
http://www.theverge.com/2016/2/1/10887838/…
https://www.rt.com/usa/…
http://www.newsfactor.com/story.xhtml?…
http://yro.slashdot.org/story/16/02/01/2249234/…
https://boingboing.net/2016/02/01/…
http://s.wsj.com/cio/2016/02/02/…
http://www.theatlantic.com/technology/archive/2016/…
http://www.npr.org/sections/alltechconsidered/2016/…
http://arstechnica.com/tech-policy/2016/02/…
http://techcrunch.com/2016/02/01/…
http://thehill.com/policy/cybersecurity/…
http://gizmodo.com/…
http://www.csmonitor.com/Technology/2016/0202/…
http://www.ibtimes.com/…
http://www.pcworld.com/article/3028042/security/…
http://betanews.com/2016/02/01/…


News

Jonathan Zittrain proposes a very interesting hypothetical about bulk searching cloud archives of personal data. Lots of good discussion in the blog comments.
https://www.schneier.com/blog/archives/2016/01/…

The BBC and BuzzFeed are jointly reporting on match fixing in tennis. Their story is based partly on leaked documents and partly on data analysis.
http://www.bbc.com/sport/tennis/35319202
http://www.buzzfeed.com/heidiblake/the-tennis-racket
http://www.buzzfeed.com/johntemplon/…
This is also an issue in professional sumo wrestling.
https://en.wikipedia.org/wiki/…

Counterfeiters are making tickets for the Broadway show “Hamilton.” Counterfeiting is much easier when the person you’re passing the fakes off to doesn’t know what the real thing is supposed to look like.
http://www.nytimes.com/2016/01/18/nyregion/…

Last July, a still-anonymous hacker broke into the network belonging to the cyberweapons arms manufacturer Hacking Team, and dumped an enormous amount of its proprietary documents online. Kaspersky Lab was able to reverse-engineer one of Hacking Team’s zero-day exploits from that data.
http://www.wired.com/2016/01/…

France rejects backdoors in encryption products. And for the right reasons, too.
http://www.theregister.co.uk/2016/01/15/…
France joins the Netherlands on this issue.
https://www.schneier.com/blog/archives/2016/01/…
Apple’s Tim Cook is going after the Obama administration on the issue.
https://theintercept.com/2016/01/12/…
In related news, Congress will introduce a bill to establish a commission to study the issue. This is what kicking the can down the road looks like.
http://thehill.com/policy/cybersecurity/…

More on El Chapo’s opsec:
https://www.schneier.com/blog/archives/2016/01/…

Interesting research on the security trade-offs between the longbow and the crossbow.
http://www.peterleeson.com/Longbow.pdf
It’s nice to see my security interests intersect with my D&D interests.

The UK government is pushing something called the MIKEY-SAKKE protocol to secure voice communications. Basically, it’s an identity-based system that necessarily requires a trusted key-distribution center. So key escrow is inherently built in, and there’s no perfect forward secrecy. The only reasonable explanation for designing a protocol with these properties is third-party eavesdropping. And GCHQ previously rejected a more secure standard, MIKEY-IBAKE, because it didn’t allow undetectable spying. Both the NSA and GCHQ repeatedly choose surveillance over security. We need to reject that decision. (The toy sketch after the links below illustrates why a central key-distribution center amounts to built-in escrow.)
https://www.benthamsgaze.org/2016/01/19/…
http://www.theregister.co.uk/2016/01/21/mikey_ibake/
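For readers unfamiliar with identity-based key distribution, here is a crude toy sketch of the structural problem. It is only an analogy, not MIKEY-SAKKE itself, and every name in it is invented. When a central key-management server derives each user’s key from its own master secret and the user’s identity, the server can regenerate any user’s key on demand: escrow is built in, and anyone who obtains the master secret can read everything those keys ever protected.

    # A crude analogy for identity-based key distribution with a central
    # key-management server (KMS). This is not MIKEY-SAKKE; it only
    # shows the structural point that escrow comes built in.

    import hashlib
    import hmac
    import os

    class KeyManagementServer:
        def __init__(self):
            self.master_secret = os.urandom(32)  # one secret for everyone

        def issue_key(self, identity):
            # Derive the key for an identity (say, a phone number).
            return hmac.new(self.master_secret, identity.encode(),
                            hashlib.sha256).digest()

    kms = KeyManagementServer()

    alice_key = kms.issue_key("alice@example.org")  # given to Alice's phone

    # Later, anyone with access to the server (or a copy of its master
    # secret) can simply re-derive the same key and read everything it
    # ever protected.
    escrowed_copy = kms.issue_key("alice@example.org")

    print(alice_key == escrowed_copy)  # True

The real protocol uses far more sophisticated mathematics, but the trust relationship is the same: whoever operates the key-distribution center can recover the keys.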

Shodan lets you browse for insecure webcams.
http://arstechnica.com/security/2016/01/…
http://search.slashdot.org/story/16/01/24/0256224/…

Good article on data-driven policing.
https://www.washingtonpost.com/local/public-safety/…

Horrible story of digital harassment.
http://fusion.net/story/212802/…
We need to figure out how to identify perpetrators like this without destroying Internet privacy in the process. One of the important points is the international nature of many of these cases. Even once the attackers are identified, the existing legal system isn’t adequate for shutting them down.

Psychological model of selfishness.
https://www.schneier.com/blog/archives/2016/01/…

“Support our Snoops” comic.
https://recode.net/2016/01/20/…

The NSA and GCHQ have successfully hacked Israel’s drones, according to the Snowden documents.
https://theintercept.com/2016/01/28/…
http://www.spiegel.de/politik/ausland/…
http://www.timesofisrael.com/…
http://samvartaka.github.io/cryptanalysis/2016/02/…

The NSA is publicly moving away from cryptographic algorithms vulnerable to cryptanalysis using a quantum computer. It just published a FAQ about the process:
https://www.iad.gov/iad/library/ia-guidance/…

This research shows how to track e-commerce users better across multiple sessions, even when they do not provide unique identifiers such as user IDs or cookies.
http://papers.ssrn.com/sol3/papers.cfm?…

The NSA is undergoing a major reorganization, combining its attack and defense sides into a single organization. I think this will make it even harder to trust the NSA. In my book “Data and Goliath,” I recommended separating the attack and defense missions of the NSA even further, breaking up the agency. And missing from the reorganization is any discussion of how US Cyber Command’s offensive and defensive capabilities relate to the NSA’s. That seems pretty important, too.
https://www.washingtonpost.com/world/…
Me on breaking up the NSA:
https://www.schneier.com/blog/archives/2014/02/…
Commentary on the reorganization:
https://www.washingtonpost.com/news/monkey-cage/wp/…
http://www.csmonitor.com/World/Passcode/…
https://www.lawfareblog.com/…
https://www.lawfareblog.com/…

The “New York Times” has a long article on fraudulent locksmiths. The scam is a basic one: quote a low price on the phone, but charge much more once you show up and do the work. But the method by which the scammers get victims is new. They exploit Google’s crowdsourced system for identifying businesses on their maps. The scammers convince Google that they have a local address, which Google displays to its users who are searching for local businesses.
http://www.nytimes.com/2016/01/31/business/…
Google isn’t really trying to fix the problem, and that’s its best strategy. It’s not the one losing money from these scammers, so it’s not motivated to fix the problem. Unless the problem rises to the level of affecting user trust in the entire system, it’s just going to do superficial things.

As part of a child pornography investigation, the FBI hacked into over 1,300 computers. The FBI seems to have obtained a single warrant, but it’s hard to believe that a legal warrant could allow the police to hack 1,300 different computers. We do know that the FBI is very vague about the extent of its operations in warrant applications. And surely we need actual public debate about this sort of technique.
https://motherboard.vice.com/read/…
https://motherboard.vice.com/read/…

Evidence of primitive warfare from Kenya’s Rift Valley.
http://www.theguardian.com/science/2016/jan/20/…

EPIC has just launched “Data Protection 2016” to try to make privacy an issue in this year’s elections.
http://dataprotection2016.org/
You can buy swag.
http://www.cafepress.com/dataprotection2016

Interesting research on determining physical location on the Internet.
https://sce.carleton.ca/~abdou/CPV_TDSC.pdf
http://www.theglobeandmail.com/technology/…
http://www.globalnerdy.com/2016/02/08/…

A man learned his wife was pregnant from her Fitbit data. The details of the story are weird. The man posted the data to Reddit and asked for analysis help. But the point is that the data can reveal a pregnancy, and that might not be something a person wants to share with a company that can sell the information for profit.
http://mashable.com/2016/02/10/fitbit-pregnant/…
And remember, retailers want to know if one of their customers is pregnant.
http://www.nytimes.com/2012/02/19/magazine/…

Interesting paper on the dark web: Daniel Moore and Thomas Rid, “Cryptopolitik and the Darknet.” They conclude that it’s mostly used for illegal activity. No surprise, really, but it’s good to have actual research to back it up.
https://www.schneier.com/blog/archives/2016/02/…


The 2016 National Threat Assessment

Published annually by the Director of National Intelligence, the “Worldwide Threat Assessment of the US Intelligence Community” is the US intelligence community’s one time to publicly talk about the threats in general. The document is the result of weeks of work and input from lots of people. For Clapper, it’s his chance to shape the dialog, set priorities, and prepare Congress for budget requests. The document is an unclassified summary of a much longer classified document. And the day also includes Clapper testifying before the Senate Armed Services Committee. (You’ll remember his now-famous lie to the Senate in 2013.)

The document covers a wide variety of threats, from terrorism to organized crime, from energy politics to climate change. Although the document clearly says “The order of the topics presented in this statement does not necessarily indicate the relative importance or magnitude of the threat in the view of the Intelligence Community,” it does. And as in 2015 and 2014, cyber threats are #1—although this year the category is called “Cyber and Technology.”

The consequences of innovation and increased reliance on information technology in the next few years on both our society’s way of life in general and how we in the Intelligence Community specifically perform our mission will probably be far greater in scope and impact than ever. Devices, designed and fielded with minimal security requirements and testing, and an ever-increasing complexity of networks could lead to widespread vulnerabilities in civilian infrastructures and US Government systems. These developments will pose challenges to our cyber defenses and operational tradecraft but also create new opportunities for our own intelligence collectors.

Especially note that last clause. The FBI might hate encryption, but the intelligence community is not going dark.

The document then calls out a few specifics like the Internet of Things and Artificial Intelligence—no surprise, considering other recent statements from government officials. This is the “…and Technology” part of the category.

More specifically:

Future cyber operations will almost certainly include an increased emphasis on changing or manipulating data to compromise its integrity (i.e., accuracy and reliability) to affect decisionmaking, reduce trust in systems, or cause adverse physical effects. Broader adoption of IoT devices and AI—in settings such as public utilities and health care—will only exacerbate these potential effects. Russian cyber actors, who post disinformation on commercial websites, might seek to alter online media as a means to influence public discourse and create confusion. Chinese military doctrine outlines the use of cyber deception operations to conceal intentions, modify stored data, transmit false data, manipulate the flow of information, or influence public sentiments—all to induce errors and miscalculation in decisionmaking.

Russia is the number one threat, followed by China, Iran, North Korea, and non-state actors:

Russia is assuming a more assertive cyber posture based on its willingness to target critical infrastructure systems and conduct espionage operations even when detected and under increased public scrutiny. Russian cyber operations are likely to target US interests to support several strategic objectives: intelligence gathering to support Russian decisionmaking in the Ukraine and Syrian crises, influence operations to support military and political objectives, and continuing preparation of the cyber environment for future contingencies.

Comments on China refer to the cybersecurity agreement from last September:

China continues to have success in cyber espionage against the US Government, our allies, and US companies. Beijing also selectively uses cyberattacks against targets it believes threaten Chinese domestic stability or regime legitimacy. We will monitor compliance with China’s September 2015 commitment to refrain from conducting or knowingly supporting cyber-enabled theft of intellectual property with the intent of providing competitive advantage to companies or commercial sectors. Private-sector security experts have identified limited ongoing cyber activity from China but have not verified state sponsorship or the use of exfiltrated data for commercial gain.

Also interesting are the comments on non-state actors, which discuss propaganda campaigns from ISIL, criminal ransomware, and hacker tools.

2016 Worldwide Threat Assessment of the US Intelligence Community:
http://www.dni.gov/files/documents/…

2015’s assessment:
http://www.dni.gov/files/documents/…

2014’s assessment:
http://www.dni.gov/files/documents/…

Clapper lying in 2014:
http://www.politifact.com/truth-o-meter/article/…

FBI vs NSA on encryption and backdoors:
https://www.fbi.gov/news/testimony/…
https://lawfareblog.com/…

Threats from the Internet of Things:
http://www.scmagazine.com/…
http://thehill.com/policy/cybersecurity/…

US-China cybersecurity agreement:
http://www.cnbc.com/2015/09/25/…
http://www.theverge.com/2015/9/25/9399187/…


AT&T Does Not Care about Your Privacy

AT&T’s CEO believes that the company should not offer robust security to its customers:

But tech company leaders aren’t all joining the fight against the deliberate weakening of encryption. AT&T CEO Randall Stephenson said this week that AT&T, Apple, and other tech companies shouldn’t have any say in the debate.

“I don’t think it is Silicon Valley’s decision to make about whether encryption is the right thing to do,” Stephenson said in an interview with “The Wall Street Journal.” “I understand [Apple CEO] Tim Cook’s decision, but I don’t think it’s his decision to make.”

His position is extreme in its disregard for the privacy of his customers. If he doesn’t believe that companies should have any say in what levels of privacy they offer their customers, you can be sure that AT&T won’t offer any robust privacy or security to you.

Does he have any clue what an anti-market position this is? He says that it is not the business of Silicon Valley companies to offer product features that might annoy the government. The “debate” about what features commercial products should have should happen elsewhere—presumably within the government. I thought we all agreed that state-controlled economies just don’t work.

My guess is that he doesn’t realize what an extreme position he’s taking by saying that product design isn’t the decision of companies to make. My guess is that AT&T is so deep in bed with the NSA and FBI that he’s just saying things he believes justify his position.

http://arstechnica.com/tech-policy/2016/01/…
http://www.wsj.com/articles/…
https://www.propublica.org/article/…


Schneier News

I’m speaking at the RSA Conference in San Francisco on Mar 2:
https://www.rsaconference.com/events/us16/agenda/…

My company, Resilient Systems, is exhibiting.
http://www.rsaconference.com/events/us16/…

I was interviewed on video by Michael Dukakis for the Boston Global Forum:
https://www.schneier.com/news/archives/2016/02/…

A podcast interview with me for the Technoskeptic:
https://www.schneier.com/news/archives/2015/12/…


“Data and Goliath” Published in Paperback

This month, “Data and Goliath” is being published in paperback. Everyone tells me that the paperback version sells better than the hardcover, even though it’s a year later. I can’t really imagine that there are tens of thousands of people who wouldn’t spend $28 on a hardcover but are happy to spend $18 on the paperback, but we’ll see. (Amazon has the hardcover for $19, the paperback for $11.70, and the Kindle edition for $14.60, plus shipping, if any. I am still selling signed hardcovers for $28 including domestic shipping—more for international.)

I got a box of paperbacks from my publisher last week. They look good. Not as good as the hardcover, but good for a trade paperback.

https://www.schneier.com/books/data-and-goliath/

Ordering a signed hardcover from me:
https://www.schneier.com/books/data-and-goliath/…


NSA’s TAO Head on Internet Offense and Defense

Rob Joyce, the head of the NSA’s Tailored Access Operations (TAO) group—basically the country’s chief hacker—spoke in public earlier this week. He talked both about how the NSA hacks into networks, and what network defenders can do to protect themselves. Here are his “Intrusion Phases”: Reconnaissance, Initial Exploitation, Establish Persistence, Install Tools, Move Laterally, Collect Exfil, and Exploit.

The talk is full of good information about how APT attacks work and how networks can defend themselves.

I was talking with Nicholas Weaver, and he said that he found these three points interesting:

1. A one-way monitoring system really gives them headaches,
because it allows the defender to go back after the fact and
see what happened, remove malware, etc.

2. The critical component of APT is the P: persistence. They
will just keep trying, trying, and trying. If you have a
temporary vulnerability—the window between a vulnerability
and a patch, temporarily turning off a defense—they’ll
exploit it.

3. Trust them when they attribute an attack (e.g.: Sony) on the
record. Attribution is hard, but when they can attribute they
know for sure—and they don’t attribute lightly.

Nothing really surprising, but all interesting. Which brings up the most important question: why did the NSA decide to put Joyce on stage in public? It surely doesn’t want all of its target networks to improve their security so much that the NSA can no longer get in. On the other hand, the NSA does want the general security of US—and presumably allied—networks to improve. My guess is that this is simply a NOBUS (“nobody but us”) issue. The NSA is, or at least believes it is, so sophisticated in its attack techniques that these defensive recommendations won’t slow it down significantly. And the Chinese/Russian/etc. state-sponsored attackers will have a harder time. Or, at least, that’s what the NSA wants us to believe.

Wheels within wheels….

https://www.youtube.com/watch?v=bDJb8WOJYdA
http://www.wired.com/2016/01/…
http://www.theregister.co.uk/2016/01/28/…

More information about the NSA’s TAO:
http://www.spiegel.de/international/world/…
https://foreignpolicy.com/2013/06/10/…
An article about TAO’s catalog of implants and attack tools. Note that the catalog is from 2007. Presumably TAO has been very busy developing new attack tools in the years since.
http://www.spiegel.de/international/world/…
http://leaksource.info/2013/12/30/…


Worldwide Encryption Products Survey

This week, I released my worldwide survey of encryption products.

The survey identified 619 entities that sell encryption products. Of those, 412, or two-thirds, are outside the US, calling into question the efficacy of any US mandates forcing backdoors for law-enforcement access. It also showed that anyone who wants to avoid US surveillance has over 567 competing products to choose from. These foreign products offer a wide variety of secure applications—voice encryption, text message encryption, file encryption, network-traffic encryption, anonymous currency—providing the same levels of security as US products do today.

Details:

* There are at least 865 hardware or software products incorporating encryption from 55 different countries. This includes 546 encryption products from outside the US, representing two-thirds of the total.

* The most common non-US country for encryption products is Germany, with 112 products. This is followed by the United Kingdom, Canada, France, and Sweden, in that order.

* The five most common countries for encryption products—including the US—account for two-thirds of the total. But smaller countries like Algeria, Argentina, Belize, the British Virgin Islands, Chile, Cyprus, Estonia, Iraq, Malaysia, St. Kitts and Nevis, Tanzania, and Thailand each produce at least one encryption product.

* Of the 546 foreign encryption products we found, 56% are available for sale and 44% are free. 66% are proprietary, and 34% are open source. Some for-sale products also have a free version.

* At least 587 entities—primarily companies—either sell or give away encryption products. Of those, 374, or about two-thirds, are outside the US.

* Of the 546 foreign encryption products, 47 are file encryption products, 68 e-mail encryption products, 104 message encryption products, 35 voice encryption products, and 61 virtual private networking products.

I know the database is incomplete, and I know there are errors. I welcome both additions and corrections, and will be releasing a 1.1 version of this survey in a few weeks.

The report:
https://www.schneier.com/cryptography/paperfiles/…

The data:
https://www.schneier.com/cryptography/paperfiles/…

Press articles:
http://arstechnica.com/tech-policy/2016/02/…
https://theintercept.com/2016/02/11/…
http://www.theregister.co.uk/2016/02/11/…
http://www.forbes.com/sites/ygrauer/2016/02/11/…
http://www.theverge.com/2016/2/11/10964172/…
http://www.cso.com.au/article/593840/…
http://www.csmonitor.com/World/Passcode/2016/0211/…
http://www.net-security.org/secworld.php?id=19433

Old blog posts on the project:
https://www.schneier.com/blog/archives/2015/09/…
https://www.schneier.com/blog/archives/2015/12/…


Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise. You can subscribe, unsubscribe, or change your address on the Web at <https://www.schneier.com/crypto-gram.html>. Back issues are also available at that URL.

Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.

CRYPTO-GRAM is written by Bruce Schneier. Bruce Schneier is an internationally renowned security technologist, called a “security guru” by The Economist. He is the author of 12 books—including “Liars and Outliers: Enabling the Trust Society Needs to Survive”—as well as hundreds of articles, essays, and academic papers. His influential newsletter “Crypto-Gram” and his blog “Schneier on Security” are read by over 250,000 people. He has testified before Congress, is a frequent guest on television and radio, has served on several government committees, and is regularly quoted in the press. Schneier is a fellow at the Berkman Center for Internet and Society at Harvard Law School, a program fellow at the New America Foundation’s Open Technology Institute, a board member of the Electronic Frontier Foundation, an Advisory Board Member of the Electronic Privacy Information Center, and the Chief Technology Officer at Resilient Systems, Inc. See <https://www.schneier.com>.

Crypto-Gram is a personal newsletter. Opinions expressed are not necessarily those of Resilient Systems, Inc.

Copyright (c) 2016 by Bruce Schneier.

Sidebar photo of Bruce Schneier by Joe MacInnis.