January 15, 2014
by Bruce Schneier
CTO, Co3 Systems, Inc.
A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.
For back issues, or to subscribe, visit <http://www.schneier.com/crypto-gram.html>.
You can read this issue on the web at <http://www.schneier.com/crypto-gram-1401.html>. These same essays and news items appear in the “Schneier on Security” blog at <http://www.schneier.com/>, along with a lively and intelligent comment section. An RSS feed is available.
In this issue:
- How the NSA Threatens National Security
- NSA Exploit of the Day
- Tor User Identified by FBI
- Security Risks of Embedded Systems
- Schneier News
- Schneier News: I’ve Joined Co3 Systems
- Twitter Users: Please Make Sure You’re Following the Right Feed
Secret NSA eavesdropping is still in the news. Details about once-secret programs continue to leak. The Director of National Intelligence has recently declassified additional information, and the President’s Review Group has just released its report and recommendations.
With all this going on, it’s easy to become inured to the breadth and depth of the NSA’s activities. But through the disclosures, we’ve learned an enormous amount about the agency’s capabilities, how it is failing to protect us, and what we need to do to regain security in the Information Age.
First and foremost, the surveillance state is robust. It is robust politically, legally, and technically. I can name three different NSA programs to collect Gmail user data. These programs are based on three different technical eavesdropping capabilities. They rely on three different legal authorities. They involve collaborations with three different companies. And this is just Gmail. The same is true for cell phone call records, Internet chats, and cell phone location data.
Second, the NSA continues to lie about its capabilities. It hides behind tortured interpretations of words like “collect,” “incidentally,” “target,” and “directed.” It cloaks programs in multiple code names to obscure their full extent and capabilities. Officials testify that a particular surveillance activity is not done under one particular program or authority, conveniently omitting that it is done under some other program or authority.
Third, US government surveillance is not just about the NSA. The Snowden documents have given us extraordinary details about the NSA’s activities, but we now know that the CIA, NRO, FBI, DEA, and local police all engage in ubiquitous surveillance using the same sorts of eavesdropping tools, and that they regularly share information with each other.
The NSA’s collect-everything mentality is largely a holdover from the Cold War, when a voyeuristic interest in the Soviet Union was the norm. Still, it is unclear how effective targeted surveillance against “enemy” countries really is. Even when we learn actual secrets, as we did regarding Syria’s use of chemical weapons earlier this year, we often can’t do anything with the information.
Ubiquitous surveillance should have died with the fall of Communism, but it got a new—and even more dangerous—life with the intelligence community’s post-9/11 “never again” terrorism mission. This quixotic goal of preventing something from happening forces us to try to know everything that does happen. This pushes the NSA to eavesdrop on online gaming worlds and on every cell phone in the world. But it’s a fool’s errand; there are simply too many ways to communicate.
We have no evidence that any of this surveillance makes us safer. NSA Director General Keith Alexander responded to these stories in June by claiming that he disrupted 54 terrorist plots. In October, he revised that number downward to 13, and then to “one or two.” At this point, the only “plot” prevented was that of a San Diego man sending $8,500 to support a Somali militant group. We have been repeatedly told that these surveillance programs would have been able to stop 9/11, yet the NSA didn’t detect the Boston bombings—even though one of the two terrorists was on the watch list and the other had a sloppy social media trail. Bulk collection of data and metadata is an ineffective counterterrorism tool.
Not only is ubiquitous surveillance ineffective, it is extraordinarily costly. I don’t mean just the budgets, which will continue to skyrocket. Or the diplomatic costs, as country after country learns of our surveillance programs against their citizens. I’m also talking about the cost to our society. It breaks so much of what our society has built. It breaks our political systems, as Congress is unable to provide any meaningful oversight and citizens are kept in the dark about what government does. It breaks our legal systems, as laws are ignored or reinterpreted, and people are unable to challenge government actions in court. It breaks our commercial systems, as US computer products and services are no longer trusted worldwide. It breaks our technical systems, as the very protocols of the Internet become untrusted. And it breaks our social systems; the loss of privacy, freedom, and liberty is much more damaging to our society than the occasional act of random violence.
And finally, these systems are susceptible to abuse. This is not just a hypothetical problem. Recent history illustrates many episodes where this information was, or would have been, abused: Hoover and his FBI spying, McCarthy, Martin Luther King Jr. and the civil rights movement, anti-war Vietnam protesters, and—more recently—the Occupy movement. Outside the US, there are even more extreme examples. Building the surveillance state makes it too easy for people and organizations to slip over the line into abuse.
It’s not just domestic abuse we have to worry about; it’s the rest of the world, too. The more we choose to eavesdrop on the Internet and other communications technologies, the less we are secure from eavesdropping by others. Our choice isn’t between a digital world where the NSA can eavesdrop and one where the NSA is prevented from eavesdropping; it’s between a digital world that is vulnerable to all attackers, and one that is secure for all users.
Fixing this problem is going to be hard. We are long past the point where simple legal interventions can help. The bill in Congress to limit NSA surveillance won’t actually do much to limit NSA surveillance. Maybe the NSA will figure out an interpretation of the law that will allow it to do what it wants anyway. Maybe it’ll do it another way, using another justification. Maybe the FBI will do it and give the NSA a copy. And when asked, it’ll lie about it.
NSA-level surveillance is like the Maginot Line was in the years before World War II: ineffective and wasteful. We need to openly disclose what surveillance we have been doing, and the known insecurities that make it possible. We need to work toward security, even if other countries like China continue to use the Internet as a giant surveillance platform. We need to build a coalition of free-world nations dedicated to a secure global Internet, and we need to continually push back against bad actors—both state and non-state—that work against that goal.
Securing the Internet requires both laws and technology. It requires Internet technology that secures data wherever it is and however it travels. It requires broad laws that put security ahead of both domestic and international surveillance. It requires additional technology to enforce those laws, and a worldwide enforcement regime to deal with bad actors. It’s not easy, and has all the problems that other international issues have: nuclear, chemical, and biological weapon non-proliferation; small arms trafficking; human trafficking; money laundering; intellectual property. Global information security and anti-surveillance needs to join those difficult global problems, so we can start making progress.
The President’s Review Group recommendations are largely positive, but they don’t go nearly far enough. We need to recognize that security is more important than surveillance, and work towards that goal.
This essay previously appeared on TheAtlantic.com.
Recent DNI declassifications:
President’s Review Group report:
NSA hiding behind particular programs:
All the Snowden documents released so far:
Other law-enforcement organizations that engage in national surveillance:
The limitations of intelligence:
The NSA’s Quixotic goal:
NSA spying on online gaming worlds:
No evidence that NSA bulk surveillance makes us safer:
Alexander’s 54 terrorist plots:
Alexander’s 13 terrorist plots:
Alexander’s one remaining plot:
Arguments that NSA surveillance could have stopped 9/11:
NSA surveillance is ineffective:
U.S. intelligence budgets:
Lack of Congressional oversight:
Security is more important than surveillance:
One of the top secret NSA documents published by Der Spiegel is a 50-page catalog of “implants” from the NSA’s Tailored Access Operations (TAO) group. Because the individual implants are so varied and we saw so many at once, most of them were never discussed in the security community. (Also, the pages were images, which makes them harder to index and search.) To rectify this, I am publishing an exploit a day on my blog.
In the blog comments, feel free to discuss how the exploit works, how we might detect it, how it has probably been improved since the catalog entry in 2008, and so on.
“DEITYBOUNCE provides software application persistence on Dell PowerEdge servers by exploiting the motherboard BIOS and utilizing System Management Mode (SMM) to gain periodic execution while the Operating System loads.”
“IRONCHEF provides access persistence to target systems by exploiting the motherboard BIOS and utilizing System Management Mode (SMM) to communicate with a hardware implant that provides two-way RF communication.” It works on the HP ProLiant DL380 G5 server.
“FEEDTROUGH is a persistence technique for two software implants, DNT’s BANANAGLEE and CES’s ZESTYLEAK used against Juniper Netscreen firewalls.”
“GOURMETTROUGH is a user configurable implant for certain Juniper firewalls. It persists DNT’s BANANAGLEE implant across reboots and OS upgrades. For some platforms, it supports a minimal implant with beaconing for OS’s unsupported by BANANAGLEE.”
“The HALLUXWATER Persistence Back Door implant is installed on a target Huawei Eudemon firewall as a boot ROM upgrade. When the target reboots, the PBD installer software will find the needed patch points and install the back door in the inbound packet processing routine.”
“JETPLOW is a firmware persistence implant for Cisco PIX Series and ASA (Adaptive Security Appliance) firewalls. It persists DNT’s BANANAGLEE software implant. JETPLOW also has a persistent back-door capability.”
“SOUFFLETROUGH is a BIOS persistence implant for Juniper SSG 500 and SSG 300 firewalls. It persists DNT’s BANANAGLEE software implant. SOUFFLETROUGH also has an advanced persistent back-door capability.”
“HEADWATER is a Persistent Backdoor (PBD) software implant for selected Huawei routers. The implant will enable covert functions to be remotely executed within the router via an Internet connection.”
“SCHOOLMONTANA provides persistence for DNT implants. The DNT implant will survive an upgrade or replacement of the operating system—including physically replacing the router’s compact flash card.”
A U.S. government employee e-mailed me, asking me not to post these on my blog. The government has a weird policy that exposed secrets are still secret, and government employees without clearances are prohibited from reading the classified paragraphs. I’ve heard this before. Basically, before exposure only people with a TOP SECRET clearance could read these paragraphs. After exposure, only people without any clearance at all can read these paragraphs. No, it doesn’t make any sense.
Eldo Kim sent an e-mail bomb threat to Harvard so he could skip a final exam. (It’s just a coincidence that I was on the Harvard campus that day.) Even though he used an anonymous account and Tor, the FBI identified him. Reading the criminal complaint, it seems that the FBI got itself a list of Harvard users that accessed the Tor network, and went through them one by one to find the one who sent the threat.
This is one of the problems of using a rare security tool. The very thing that gives you plausible deniability also makes you the most likely suspect. The FBI didn’t have to break Tor; they just used conventional police mechanisms to get Kim to confess.
Tor didn’t break; Kim did.
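The technique described in the complaint amounts to a simple set intersection, with no need to break Tor at all. Here is a hypothetical sketch in Python; the relay addresses, log entries, names, and timestamps are all invented for illustration:

```python
# Hypothetical sketch of the investigative technique described above:
# intersect campus connection logs with the public list of Tor relay
# IP addresses, then narrow by time. All data here is made up.

# The public Tor consensus lists every relay's IP address.
tor_relay_ips = {"203.0.113.5", "198.51.100.7", "192.0.2.99"}

# Campus network logs: (username, destination_ip, timestamp)
campus_logs = [
    ("alice", "93.184.216.34", "2013-12-16 08:12"),
    ("bob",   "203.0.113.5",   "2013-12-16 08:25"),  # touched a Tor relay
    ("carol", "198.51.100.7",  "2013-12-16 09:01"),  # touched a Tor relay
]

threat_sent_at = "2013-12-16 08:30"  # when the e-mail arrived

# Anyone who connected to a Tor relay before the threat arrived
# becomes a suspect to interview.
suspects = sorted({user for user, dst, ts in campus_logs
                   if dst in tor_relay_ips and ts <= threat_sent_at})
print(suspects)  # ['bob']
```

The point of the sketch is that when very few people on a network use Tor, the intersection is small enough that ordinary police work finishes the job.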
This story is about how at least two professional online poker players had their hotel rooms broken into and their computers infected with malware. I agree with the conclusion: “So, what’s the moral of the story? If you have a laptop that is used to move large amounts of money, take good care of it. Lock the keyboard when you step away. Put it in a safe when you’re not around it, and encrypt the disk to prevent off-line access. Don’t surf the web with it (use another laptop/device for that, they’re relatively cheap). This advice is true whether you’re a poker pro using a laptop for gaming or a business controller in a large company using the computer for wiring a large amount of funds.” Cheap laptops are very cheap, especially if you buy old models off the remainder tables at big box stores. There’s no reason not to have special-purpose machines.
An interesting research paper documents a “honeymoon effect” when it comes to software and vulnerabilities: attackers are more likely to find vulnerabilities in older and more familiar code. It’s a few years old, but I haven’t seen it before now. The paper is by Sandy Clark, Stefan Frei, Matt Blaze, and Jonathan Smith: “Familiarity Breeds Contempt: The Honeymoon Effect and the Role of Legacy Code in Zero-Day Vulnerabilities,” Annual Computer Security Applications Conference 2010.
Acoustic cryptanalysis “can extract full 4096-bit RSA decryption keys from laptop computers (of various models), within an hour, using the sound generated by the computer during the decryption of some chosen ciphertexts.”
Two long blog posts on the NSA. The first is about RSA entering into a secret agreement with the NSA to make the backdoored Dual_EC_DRBG the default random number generator in their BSAFE toolkit. The real story here is how the NSA has corroded trust on the Internet.
The second is about the NSA Tailored Access Operations (TAO) group and their capabilities, based on new NSA top secret documents released by Der Spiegel. Jacob Appelbaum did a great job reporting on this stuff.
If you read nothing else from this issue of Crypto-Gram, read those two links.
Here is the list of NSA documents from the Der Spiegel article:
Fascinating report from Citizen Lab on the use of malware in the current Syrian conflict.
Amusing Christmas comic.
“Talking to Vula” is the story of a 1980s secret communications channel between black South African leaders and others living in exile in the UK. The system used encrypted text encoded into DTMF “touch tones” and transmitted from pay phones.
Joseph Stiglitz has an excellent essay on the value of trust, and the lack of it in today’s society.
It has amazed me that the NSA doesn’t seem to do any cost/benefit analyses on any of its surveillance programs. This seems particularly important for bulk surveillance programs, as they have significant costs aside from the obvious monetary costs. In this paper, John Mueller and Mark G. Stewart have done the analysis on one of these programs. Worth reading.
Matt Blaze on TAO’s methods, pointing out that targeted surveillance is better than bulk surveillance.
This is important. As scarily impressive as TAO’s implant catalog is, it’s targeted. We can argue about how it should be targeted—who counts as a “bad guy” and who doesn’t—but it’s much better than the NSA’s collecting cell phone location data on everyone on the planet. The more we can deny the NSA the ability to do broad wholesale surveillance on everyone, and force them to do targeted surveillance on individuals and organizations, the safer we all are.
The failure of privacy notices and consumer choice.
We’re at a crisis point now with regard to the security of embedded systems, where computing is embedded into the hardware itself—as with the Internet of Things. These embedded computers are riddled with vulnerabilities, and there’s no good way to patch them.
It’s not unlike what happened in the mid-1990s, when the insecurity of personal computers was reaching crisis levels. Software and operating systems were riddled with security vulnerabilities, and there was no good way to patch them. Companies were trying to keep vulnerabilities secret, and not releasing security updates quickly. And when updates were released, it was hard—if not impossible—to get users to install them. This has changed over the past twenty years, due to a combination of full disclosure—publishing vulnerabilities to force companies to issue patches quicker—and automatic updates: automating the process of installing updates on users’ computers. The results aren’t perfect, but they’re much better than ever before.
But this time the problem is much worse, because the world is different: All of these devices are connected to the Internet. The computers in our routers and modems are much more powerful than the PCs of the mid-1990s, and the Internet of Things will put computers into all sorts of consumer devices. The industries producing these devices are even less capable of fixing the problem than the PC and software industries were.
If we don’t solve this soon, we’re in for a security disaster as hackers figure out that it’s easier to hack routers than computers. At a recent Def Con, a researcher looked at thirty home routers and broke into half of them—including some of the most popular and common brands.
To understand the problem, you need to understand the embedded systems market.
Typically, these systems are powered by specialized computer chips made by companies such as Broadcom, Qualcomm, and Marvell. These chips are cheap, and the profit margins slim. Aside from price, the way the manufacturers differentiate themselves from each other is by features and bandwidth. They typically put a version of the Linux operating system onto the chips, as well as a bunch of other open-source and proprietary components and drivers. They do as little engineering as possible before shipping, and there’s little incentive to update their “board support package” until absolutely necessary.
The system manufacturers—usually original design manufacturers (ODMs) who often don’t get their brand name on the finished product—choose a chip based on price and features, and then build a router, server, or whatever. They don’t do a lot of engineering, either. The brand-name company on the box may add a user interface and maybe some new features, make sure everything works, and they’re done, too.
The problem with this process is that no one entity has any incentive, expertise, or even ability to patch the software once it’s shipped. The chip manufacturer is busy shipping the next version of the chip, and the ODM is busy upgrading its product to work with this next chip. Maintaining the older chips and products just isn’t a priority.
And the software is old, even when the device is new. For example, one survey of common home routers found that the software components were four to five years older than the device. The minimum age of the Linux operating system was four years. The minimum age of the Samba file system software: six years. They may have had all the security patches applied, but most likely not. No one has that job. Some of the components are so old that they’re no longer being patched. This patching is especially important because security vulnerabilities are found “more easily” as systems age.
To make matters worse, it’s often impossible to patch the software or upgrade the components to the latest version. Often, the complete source code isn’t available. Yes, they’ll have the source code to Linux and any other open-source components. But many of the device drivers and other components are just “binary blobs”—no source code at all. That’s the most pernicious part of the problem: No one can possibly patch code that’s just binary.
Even when a patch is possible, it’s rarely applied. Users usually have to manually download and install relevant patches. But since users never get alerted about security updates, and don’t have the expertise to manually administer these devices, it doesn’t happen. Sometimes the ISPs have the ability to remotely patch routers and modems, but this is also rare.
The result is hundreds of millions of devices that have been sitting on the Internet, unpatched and insecure, for the last five to ten years.
Hackers are starting to notice. The DNSChanger malware attacks home routers as well as computers. In Brazil, 4.5 million DSL routers were compromised for purposes of financial fraud. Last month, Symantec reported on a Linux worm that targets routers, cameras, and other embedded devices.
This is only the beginning. All it will take is some easy-to-use hacker tools for the script kiddies to get into the game.
And the Internet of Things will only make this problem worse, as the Internet—as well as our homes and bodies—becomes flooded with new embedded devices that will be equally poorly maintained and unpatchable. But routers and modems pose a particular problem because they are: (1) between users and the Internet, so turning them off is increasingly not an option; (2) more powerful and more general in function than other embedded devices; and (3) the one 24/7 computing device in the house, and a natural place for lots of new features.
We were here before with personal computers, and we fixed the problem. But disclosing vulnerabilities in an effort to force vendors to fix the problem won’t work the same way with embedded systems. The last time, the problem was computers, ones mostly not connected to the Internet, and slow-spreading viruses. The scale is different today: more devices, more vulnerability, viruses spreading faster on the Internet, and less technical expertise on both the vendor and the user sides. Plus vulnerabilities that are impossible to patch.
Combine full function with lack of updates, add in a pernicious market dynamic that has inhibited updates and prevented anyone else from updating, and we have an incipient disaster in front of us. It’s just a matter of when.
We simply have to fix this. We have to put pressure on embedded system vendors to design their systems better. We need open-source driver software—no more binary blobs!—so third-party vendors and ISPs can provide security tools and software updates for as long as the device is in use. We need automatic update mechanisms to ensure they get installed.
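As a sketch of what such an update mechanism minimally requires—refusing any firmware image that doesn’t verify against trusted metadata fetched separately—here is an illustrative Python fragment. Real update systems use public-key signatures; the bare SHA-256 digest below is a stand-in for that, and all names and data are invented:

```python
# Minimal sketch of the safe-auto-update idea: never apply a firmware
# image unless it matches a digest from a trusted, separately obtained
# manifest. Real systems use public-key signatures; a bare hash stands
# in for that here. All names and data are illustrative.
import hashlib

def verify_firmware(image: bytes, trusted_sha256_hex: str) -> bool:
    """Return True only if the image hashes to the expected digest."""
    return hashlib.sha256(image).hexdigest() == trusted_sha256_hex

image = b"\x7fFIRMWARE-v2.1-example-bytes"
manifest_digest = hashlib.sha256(image).hexdigest()  # what a vendor would publish

assert verify_firmware(image, manifest_digest)                 # intact: install
assert not verify_firmware(image + b"\x00", manifest_digest)   # tampered: refuse
print("update verified")
```

The hard part, of course, isn’t the check itself—it’s getting vendors to publish the metadata and ship the mechanism at all.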
The economic incentives point to large ISPs as the driver for change. Whether they’re to blame or not, the ISPs are the ones who get the service calls for crashes. They often have to send users new hardware because it’s the only way to update a router or modem, and that can easily cost a year’s worth of profit from that customer. This problem is only going to get worse, and more expensive. Paying the cost up front for better embedded systems is much cheaper than paying the costs of the resultant security disasters.
This essay originally appeared on Wired.com.
Security vulnerabilities of older systems:
Two essays that debunk the “NSA surveillance could have stopped 9/11” myth:
I left BT at the end of December.
For decades, I’ve said that good security is a combination of protection, detection, and response. In 1999, when I formed Counterpane Internet Security, I focused the company on what was then the nascent area of detection. Since then, there have been many products and services that focus on detection, and it’s a huge part of the information security industry. Now, it’s time for response. While there are many companies that offer services to aid in incident response—mitigation, forensics, recovery, compliance—there are no comprehensive products in this area.
Well, almost none. Co3 Systems provides a coordination system for incident response. I think of it as a social networking site for incident response, though the company doesn’t use this term. The idea is that the system generates your incident response plan on installation, and when something happens, automatically executes it. It collects information about the incident, assigns and tracks tasks, and logs everything you do. It links you with information you might need, companies you might want to talk to, and regulations you might be required to comply with. And it logs everything, so you can demonstrate that you followed your response plan and thus the law—or see how and where you fell short.
Years ago, attacks were both less frequent and less serious, and compliance requirements were more modest. But today, companies get breached all the time, and regulatory requirements are complicated—and getting more so all the time. Ad hoc incident response isn’t enough anymore. There are lots of things you need to do when you’re attacked, both to secure your network from the attackers and to secure your company from litigation.
The problem with any emergency response plan is that you only need it in an emergency. Emergencies are both complicated and stressful, and it’s easy for things to fall through the cracks. It’s critical to have something—a system, a checklist, even a person—that tracks everything and makes sure that everything that has to get done is.
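The core idea is simple enough to sketch: a checklist where every completed action is timestamped and appended to a log, so nothing falls through the cracks and you can later show what was done and when. This toy Python fragment is a generic illustration of that principle, not Co3 Systems’ actual product or API:

```python
# Toy illustration of a tracked incident-response checklist with an
# append-only audit log. Generic sketch only; not Co3's product or API.
from datetime import datetime, timezone

class IncidentChecklist:
    def __init__(self, tasks):
        self.pending = list(tasks)
        self.audit_log = []          # append-only record of everything done

    def complete(self, task, who):
        self.pending.remove(task)    # raises ValueError if task isn't on the list
        self.audit_log.append(
            (datetime.now(timezone.utc).isoformat(), who, task))

plan = IncidentChecklist([
    "isolate affected hosts",
    "preserve forensic images",
    "notify legal/compliance",
])
plan.complete("isolate affected hosts", who="oncall-1")

print(len(plan.pending), len(plan.audit_log))  # 2 tasks left, 1 logged action
```

Even something this crude beats ad hoc response: the pending list tells you what still has to happen, and the log proves what already did.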
Co3 Systems is great in an emergency, but of course you really want to have installed and configured it *before* the emergency.
It will also serve you better if you use it regularly. Co3 Systems is designed to be valuable for all incident response, both the mundane and the critical. The system can record and assess everything that appears abnormal. The incident response plans it generates make it easy, and the intelligence feeds make it useful. If Co3 Systems is already in place, when something turns out to be a real incident, it’s easy to escalate it to the next level, and you’ll be using tools you’re already familiar with.
Co3 Systems works either from a private cloud or on your network. I think the cloud makes more sense; you don’t want to coordinate incident response from the network that is under attack. And it’s constantly getting better as more partner companies integrate their information feeds and best practices. The company has launched some of these partnerships already, and there are some major names soon to be announced.
Today I am joining Co3 Systems as its Chief Technology Officer. I’ve been on the company’s advisory board for about a year, and was an informal adviser to CEO John Bruce before that. John and I worked together at Counterpane in the early 2000s, and we both think this is a natural extension to what we tried to build there. I also know CMO Ted Julian from his days at @Stake. Together, we’re going to build *the* incident response product.
I’m really excited about this—and the fact that the company headquarters are just three T stops inbound to Harvard and the Berkman Center makes it even more perfect.
I have an official Twitter feed of my blog; it’s @schneierblog. There’s also an unofficial feed at @Bruce_Schneier. I have nothing to do with that one.
I wouldn’t mind the unofficial feed—if people are reading my blog, who cares—except that it isn’t working right, and hasn’t been for some time. It publishes some posts weeks late and skips others entirely. I’m only hoping that this one will show up there.
It’s also kind of annoying that @Bruce_Schneier keeps following people, who then think it’s me. It’s not; I never log in to Twitter and I don’t follow anyone there.
So if you want to read my blog on Twitter, please make sure you’re following @schneierblog. And if you are the person who runs the @Bruce_Schneier account—if anyone is even running it anymore—please e-mail me at the address on my Contact page. I’d rather see it fixed than shut down, but better for it to be shut down than continue in its broken state.
My contact page:
Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise. You can subscribe, unsubscribe, or change your address on the Web at <http://www.schneier.com/crypto-gram.html>. Back issues are also available at that URL.
Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.
CRYPTO-GRAM is written by Bruce Schneier. Bruce Schneier is an internationally renowned security technologist, called a “security guru” by The Economist. He is the author of 12 books—including “Liars and Outliers: Enabling the Trust Society Needs to Survive”—as well as hundreds of articles, essays, and academic papers. His influential newsletter “Crypto-Gram” and his blog “Schneier on Security” are read by over 250,000 people. He has testified before Congress, is a frequent guest on television and radio, has served on several government committees, and is regularly quoted in the press. Schneier is a fellow at the Berkman Center for Internet and Society at Harvard Law School, a program fellow at the New America Foundation’s Open Technology Institute, a board member of the Electronic Frontier Foundation, an Advisory Board Member of the Electronic Privacy Information Center, and the Chief Technology Officer at Co3 Systems, Inc. See <http://www.schneier.com>.
Crypto-Gram is a personal newsletter. Opinions expressed are not necessarily those of Co3 Systems, Inc.
Copyright (c) 2014 by Bruce Schneier.