The NSA is undergoing a major reorganization, combining its attack and defense sides into a single organization:
In place of the Signals Intelligence and Information Assurance directorates (the organizations that historically have spied on foreign targets and defended classified networks against spying, respectively), the NSA is creating a Directorate of Operations that combines the operational elements of each.
It's going to be difficult, since their missions and cultures are so different.
The Information Assurance Directorate (IAD) seeks to build relationships with private-sector companies and help find vulnerabilities in software, most of which officials say wind up being disclosed. It issues software guidance and tests the security of systems to help strengthen their defenses.
But the other side of the NSA house, which looks for vulnerabilities that can be exploited to hack a foreign network, is much more secretive.
"You have this kind of clash between the closed environment of the sigint mission and the need of the information-assurance team to be out there in the public and be seen as part of the solution," said a second former official. "I think that's going to be a hard trick to pull off."
I think this will make it even harder to trust the NSA. In my book Data and Goliath, I recommended separating the attack and defense missions of the NSA even further, breaking up the agency. (I also wrote about that idea here.)
And missing from the reorg is any explanation of how US Cyber Command's offensive and defensive capabilities relate to the NSA's. That seems pretty important, too.
This research shows how to track e-commerce users better across multiple sessions, even when they do not provide unique identifiers such as user IDs or cookies.
Abstract: Targeting individual consumers has become a hallmark of direct and digital marketing, particularly as it has become easier to identify customers as they interact repeatedly with a company. However, across a wide variety of contexts and tracking technologies, companies find that customers cannot be consistently identified, which leads to a substantial fraction of anonymous visits in any CRM database. We develop a Bayesian imputation approach that allows us to probabilistically assign anonymous sessions to users, while accounting for a customer's demographic information, frequency of interaction with the firm, and activities the customer engages in. Our approach simultaneously estimates a hierarchical model of customer behavior while probabilistically imputing which customers made the anonymous visits. We present both synthetic and real data studies that demonstrate our approach makes more accurate inference about individual customers' preferences and responsiveness to marketing, relative to common approaches to anonymous visits: nearest-neighbor matching or ignoring the anonymous visits. We show how companies who use the proposed method will be better able to target individual customers, as well as infer how many of the anonymous visits are made by new customers.
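The core idea, stripped of the paper's hierarchical model, is ordinary Bayesian posterior assignment: score each known customer by how likely they are to have generated the anonymous session, weighted by how often they visit. Here is a minimal sketch of that idea; the customer names, the naive per-category multinomial model, and all the numbers are invented for illustration and are not from the paper.

```python
# Toy sketch of probabilistically assigning an anonymous session to a
# known customer, in the spirit of the paper's Bayesian imputation idea.
# All names, numbers, and the simple multinomial model are invented.
import math

# Per-customer behavior: visit frequency (used as a prior) and the
# probability of viewing each product category during a session.
customers = {
    "alice": {"freq": 30, "cats": {"books": 0.7, "electronics": 0.2, "toys": 0.1}},
    "bob":   {"freq": 10, "cats": {"books": 0.1, "electronics": 0.8, "toys": 0.1}},
}

def posterior(session_cats):
    """P(customer | session) proportional to P(session | customer) * P(customer)."""
    scores = {}
    for name, c in customers.items():
        log_p = math.log(c["freq"])          # prior: frequent visitors more likely
        for cat in session_cats:             # likelihood of the observed page views
            log_p += math.log(c["cats"].get(cat, 1e-6))
        scores[name] = log_p
    # Normalize log-scores into probabilities.
    m = max(scores.values())
    weights = {n: math.exp(s - m) for n, s in scores.items()}
    total = sum(weights.values())
    return {n: w / total for n, w in weights.items()}

# An anonymous session that viewed two electronics pages is assigned
# mostly to "bob", despite "alice" visiting three times as often.
print(posterior(["electronics", "electronics"]))
```

The paper goes much further (demographics, hierarchical estimation, and the possibility that a visit comes from a new customer), but the assignment step reduces to this kind of posterior calculation.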
The Internet of Things is the name given to the computerization of everything in our lives. Already you can buy Internet-enabled thermostats, light bulbs, refrigerators, and cars. Soon everything will be on the Internet: the things we own, the things we interact with in public, autonomous things that interact with each other.
These "things" will have two separate parts. One part will be sensors that collect data about us and our environment. Already our smartphones know our location and, with their onboard accelerometers, track our movements. Things like our thermostats and light bulbs will know who is in the room. Internet-enabled street and highway sensors will know how many people are out and about -- and eventually who they are. Sensors will collect environmental data from all over the world.
The other part will be actuators. They'll affect our environment. Our smart thermostats aren't collecting information about ambient temperature and who's in the room for nothing; they set the temperature accordingly. Phones already know our location, and send that information back to Google Maps and Waze to determine where traffic congestion is; when they're linked to driverless cars, they'll automatically route us around that congestion. Amazon already wants autonomous drones to deliver packages. The Internet of Things will increasingly perform actions for us and in our name.
Increasingly, human intervention will be unnecessary. The sensors will collect data. The system's smarts will interpret the data and figure out what to do. And the actuators will do things in our world. You can think of the sensors as the eyes and ears of the Internet, the actuators as the hands and feet of the Internet, and the stuff in the middle as the brain. This makes the future clearer. The Internet now senses, thinks, and acts.
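The sense-think-act structure described above can be sketched as a minimal control loop. The thermostat example, its function names, and its thresholds are invented here for illustration; they stand in for any sensor/brain/actuator chain.

```python
# Minimal sense-think-act loop illustrating the sensor/brain/actuator
# split: sensors collect data, the "brain" interprets it, actuators act.
# The thermostat logic and all numbers are invented for illustration.

def sense(readings):
    """Sensors: gather raw data about the environment."""
    return {"temp_c": readings["temp_c"], "occupied": readings["occupied"]}

def think(state, target_c=21.0):
    """Brain: interpret the data and decide what to do."""
    if not state["occupied"]:
        return "idle"                 # nobody in the room: save energy
    return "heat" if state["temp_c"] < target_c else "idle"

def act(decision):
    """Actuators: change the world based on the decision."""
    return {"heater_on": decision == "heat"}

# One pass through the loop: a cold, occupied room turns the heater on.
action = act(think(sense({"temp_c": 18.5, "occupied": True})))
print(action)  # {'heater_on': True}
```

Replace the thermostat with street sensors, cloud analytics, and driverless cars and the structure is the same; only the scale changes.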
We're building a world-sized robot, and we don't even realize it.
I've started calling this robot the World-Sized Web.
The World-Sized Web -- can I call it WSW? -- is more than just the Internet of Things. Much of the WSW's brains will be in the cloud, on servers connected via cellular, Wi-Fi, or short-range data networks. It's mobile, of course, because many of these things will move around with us, like our smartphones. And it's persistent. You might be able to turn off small pieces of it here and there, but in the main the WSW will always be on, and always be there.
None of these technologies are new, but they're all becoming more prevalent. I believe that we're at the brink of a phase change around information and networks. The difference in degree will become a difference in kind. That's the robot that is the WSW.
This robot will be increasingly autonomous, at first in simple ways and eventually using the capabilities of artificial intelligence. Drones with sensors will fly to places where the WSW needs to collect data. Vehicles with actuators will drive to places the WSW needs to affect. Other parts of the robot will "decide" where to go, what data to collect, and what to do.
We're already seeing this kind of thing in warfare; drones are surveilling the battlefield and firing weapons at targets. Humans are still in the loop, but how long will that last? And when both the data collection and resultant actions are more benign than a missile strike, autonomy will be an easier sell.
By and large, the WSW will be a benign robot. It will collect data and do things in our interests; that's why we're building it. But it will change our society in ways we can't predict, some of them good and some of them bad. It will maximize profits for the people who control the components. It will enable totalitarian governments. It will empower criminals and hackers in new and different ways. It will cause power balances to shift and societies to change.
These changes are inherently unpredictable, because they're based on the emergent properties of these new technologies interacting with each other, us, and the world. In general, it's easy to predict technological changes due to scientific advances, but much harder to predict social changes due to those technological changes. For example, it was easy to predict that better engines would mean that cars could go faster. It was much harder to predict that the result would be a demographic shift into suburbs. Driverless cars and smart roads will again transform our cities in new ways, as will autonomous drones, cheap and ubiquitous environmental sensors, and a network that can anticipate our needs.
Maybe the WSW is more like an organism. It won't have a single mind. Parts of it will be controlled by large corporations and governments. Small parts of it will be controlled by us. But writ large its behavior will be unpredictable, the result of millions of tiny goals and billions of interactions between parts of itself.
We need to start thinking seriously about our new world-spanning robot. The market will not sort this out all by itself. By nature, it is short-term and profit-motivated -- and these issues require broader thinking. University of Washington law professor Ryan Calo has proposed a Federal Robotics Commission as a place where robotics expertise and advice can be centralized within the government. Japan and Korea are already moving in this direction.
Speaking as someone with a healthy skepticism of yet another government agency, I think we need to go further. We need to create a new agency, a Department of Technology Policy, that can deal with the WSW in all its complexities. It needs the power to aggregate expertise and advise other agencies, and probably the authority to regulate when appropriate. We can argue the details, but no existing government entity has either the expertise or the authority to tackle something this broad and far-reaching. And the question is not whether government will start regulating these technologies, it's how smart it will be when it does.
The WSW is being built right now, without anyone noticing, and it'll be here before we know it. Whatever changes it means for society, we don't want it to take us by surprise.
This essay originally appeared on Forbes.com, which annoyingly blocks browsers using ad blockers.
EDITED TO ADD: Kevin Kelly has also thought along these lines, calling the robot "Holos."
EDITED TO ADD: Commentary.
Both the "going dark" metaphor of FBI Director James Comey and the contrasting "golden age of surveillance" metaphor of privacy law professor Peter Swire focus on the value of data to law enforcement. As framed in the media, encryption debates are about whether law enforcement should have surreptitious access to data, or whether companies should be allowed to provide strong encryption to their customers.
It's a myopic framing that focuses only on one threat -- criminals, including domestic terrorists -- and the demands of law enforcement and national intelligence. This obscures the most important aspects of the encryption issue: the security it provides against a much wider variety of threats.
Encryption secures our data and communications against eavesdroppers like criminals, foreign governments, and terrorists. We use it every day to hide our cell phone conversations from eavesdroppers, and to hide our Internet purchasing from credit card thieves. Dissidents in China and many other countries use it to avoid arrest. It's a vital tool for journalists to communicate with their sources, for NGOs to protect their work in repressive countries, and for attorneys to communicate with their clients.
Many technological security failures of today can be traced to failures of encryption. In 2014 and 2015, unnamed hackers -- probably the Chinese government -- stole 21.5 million personal files of U.S. government employees and others. They wouldn't have obtained this data if it had been encrypted. Many large-scale criminal data thefts were made either easier or more damaging because data wasn't encrypted: Target, TJ Maxx, Heartland Payment Systems, and so on. Many countries are eavesdropping on the unencrypted communications of their own citizens, looking for dissidents and other voices they want to silence.
Adding backdoors will only exacerbate the risks. As technologists, we can't build an access system that only works for people of a certain citizenship, or with a particular morality, or only in the presence of a specified legal document. If the FBI can eavesdrop on your text messages or get at your computer's hard drive, so can other governments. So can criminals. So can terrorists. This is not theoretical; again and again, backdoor accesses built for one purpose have been surreptitiously used for another. Vodafone built backdoor access into Greece's cell phone network for the Greek government; it was used against the Greek government in 2004-2005. Google kept a database of backdoor accesses provided to the U.S. government under CALEA; the Chinese breached that database in 2009.
We're not being asked to choose between security and privacy. We're being asked to choose between less security and more security.
This trade-off isn't new. In the mid-1990s, cryptographers argued that escrowing encryption keys with central authorities would weaken security. In 2011, cybersecurity researcher Susan Landau published her excellent book Surveillance or Security?, which deftly parsed the details of this trade-off and concluded that security is far more important.
Ubiquitous encryption protects us much more from bulk surveillance than from targeted surveillance. For a variety of technical reasons, computer security is extraordinarily weak. If a sufficiently skilled, funded, and motivated attacker wants into your computer, they're in. If they're not, it's because you're not high enough on their priority list to bother with. Widespread encryption forces the listener -- whether a foreign government, criminal, or terrorist -- to target. And this hurts repressive governments much more than it hurts terrorists and criminals.
Of course, criminals and terrorists have used, are using, and will use encryption to hide their planning from the authorities, just as they will use many aspects of society's capabilities and infrastructure: cars, restaurants, telecommunications. In general, we recognize that such things can be used by both honest and dishonest people. Society thrives nonetheless because the honest so outnumber the dishonest. Compare this with the tactic of secretly poisoning all the food at a restaurant. Yes, we might get lucky and poison a terrorist before he strikes, but we'll harm all the innocent customers in the process. Weakening encryption for everyone is harmful in exactly the same way.
This essay previously appeared as part of the paper "Don't Panic: Making Progress on the 'Going Dark' Debate." It was reprinted on Lawfare. A modified version was reprinted by the MIT Technology Review.
Don't Panic: Making Progress on the "Going Dark" Debate
From the report:
In this report, we question whether the "going dark" metaphor accurately describes the state of affairs. Are we really headed to a future in which our ability to effectively surveil criminals and bad actors is impossible? We think not. The question we explore is the significance of this lack of access to communications for legitimate government interests. We argue that communications in the future will neither be eclipsed into darkness nor illuminated without shadow.
In short, our findings are:
- End-to-end encryption and other technological architectures for obscuring user data are unlikely to be adopted ubiquitously by companies, because the majority of businesses that provide communications services rely on access to user data for revenue streams and product functionality, including user data recovery should a password be forgotten.
- Software ecosystems tend to be fragmented. In order for encryption to become both widespread and comprehensive, far more coordination and standardization than currently exists would be required.
- Networked sensors and the Internet of Things are projected to grow substantially, and this has the potential to drastically change surveillance. The still images, video, and audio captured by these devices may enable real-time intercept and recording with after-the-fact access. Thus an inability to monitor an encrypted channel could be mitigated by the ability to monitor from afar a person through a different channel.
- Metadata is not encrypted, and the vast majority is likely to remain so. This is data that needs to stay unencrypted in order for the systems to operate: location data from cell phones and other devices, telephone calling records, header information in e-mail, and so on. This information provides an enormous amount of surveillance data that was unavailable before these systems became widespread.
- These trends raise novel questions about how we will protect individual privacy and security in the future. Today's debate is important, but for all its efforts to take account of technological trends, it is largely taking place without reference to the full picture.
Q: Is there a quantum resistant public-key algorithm that commercial vendors should adopt?
A: While a number of interesting quantum resistant public key algorithms have been proposed external to NSA, nothing has been standardized by NIST, and NSA is not specifying any commercial quantum resistant standards at this time. NSA expects that NIST will play a leading role in the effort to develop a widely accepted, standardized set of quantum resistant algorithms. Once these algorithms have been standardized, NSA will require vendors selling to NSS operators to provide FIPS validated implementations in their products. Given the level of interest in the cryptographic community, we hope that there will be quantum resistant algorithms widely available in the next decade. NSA does not recommend implementing or using non-standard algorithms, and the field of quantum resistant cryptography is no exception.
Q: When will quantum resistant cryptography be available?
A: For systems that will use unclassified cryptographic algorithms it is vital that NSA use cryptography that is widely accepted and widely available as part of standard commercial offerings vetted through NIST's cryptographic standards development process. NSA will continue to support NIST in the standardization process and will also encourage work in the vendor and larger standards communities to help produce standards with broad support for deployment in NSS. NSA believes that NIST can lead a robust and transparent process for the standardization of publicly developed and vetted algorithms, and we encourage this process to begin soon. NSA believes that the external cryptographic community can develop quantum resistant algorithms and reach broad agreement for standardization within a few years.
Lots of other interesting stuff in the Q&A.
Rob Joyce, the head of the NSA's Tailored Access Operations (TAO) group -- basically the country's chief hacker -- spoke in public earlier this week. He talked both about how the NSA hacks into networks, and what network defenders can do to protect themselves. Here's a video of the talk, and here are two good summaries. He described the phases of a typical network attack:
- Initial Exploitation
- Establish Persistence
- Install Tools
- Move Laterally
- Collect, Exfiltrate, and Exploit
The event was the USENIX Enigma Conference.
The talk is full of good information about how APT attacks work and how networks can defend themselves. Nothing really surprising, but all interesting. Which brings up the most important question: why did the NSA decide to put Joyce on stage in public? It surely doesn't want all of its target networks to improve their security so much that the NSA can no longer get in. On the other hand, the NSA does want the general security of US -- and presumably allied -- networks to improve. My guess is that this is simply a NOBUS ("nobody but us") issue. The NSA is, or at least believes it is, so sophisticated in its attack techniques that these defensive recommendations won't slow it down significantly. And the Chinese/Russian/etc state-sponsored attackers will have a harder time. Or, at least, that's what the NSA wants us to believe.
Wheels within wheels....
More information about the NSA's TAO group is here and here. Here's an article about TAO's catalog of implants and attack tools. Note that the catalog is from 2007. Presumably TAO has been very busy developing new attack tools over the past ten years.
EDITED TO ADD (2/2): I was talking with Nicholas Weaver, and he said that he found these three points interesting:
- A one-way monitoring system really gives them headaches, because it allows the defender to go back after the fact and see what happened, remove malware, etc.
- The critical component of APT is the P: persistence. They will just keep trying, trying, and trying. If you have a temporary vulnerability -- the window between a vulnerability and a patch, temporarily turning off a defense -- they'll exploit it.
- Trust them when they attribute an attack (e.g., Sony) on the record. Attribution is hard, but when they can attribute they know for sure -- and they don't attribute lightly.
Schneier on Security is a personal website. Opinions expressed are not necessarily those of Resilient Systems, Inc.