March 15, 2014
by Bruce Schneier
CTO, Co3 Systems, Inc.
A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.
For back issues, or to subscribe, visit <http://www.schneier.com/crypto-gram.html>.
You can read this issue on the web at <http://www.schneier.com/crypto-gram-1403.html>. These same essays and news items appear in the “Schneier on Security” blog at <http://www.schneier.com/>, along with a lively and intelligent comment section. An RSS feed is available.
In this issue:
- Breaking Up the NSA
- Computer Network Exploitation vs. Computer Network Attack
- Metadata = Surveillance
- NSA News
- Surveillance by Algorithm
- NSA Exploit of the Day
- Who Should Store NSA Surveillance Data
- Schneier News
- Co3 Systems News
- The Security of the Fortuna PRNG
- RCS Spyware and Citizen Lab
- Choosing Secure Passwords
The NSA has become too big and too powerful. What was supposed to be a single agency with a dual mission — protecting the security of U.S. communications and eavesdropping on the communications of our enemies — has become unbalanced in the post-Cold War, all-terrorism-all-the-time era.
Putting the U.S. Cyber Command, the military’s cyberwar wing, in the same location and under the same commander expanded the NSA’s power. The result is an agency that prioritizes intelligence gathering over security, and that’s increasingly putting us all at risk. It’s time we thought about breaking up the National Security Agency.
Broadly speaking, three types of NSA surveillance programs were exposed by the documents released by Edward Snowden. And while the media tends to lump them together, understanding their differences is critical to understanding how to divide up the NSA’s missions.
The first is targeted surveillance.
This is best illustrated by the work of the NSA’s Tailored Access Operations (TAO) group, including its catalog of hardware and software “implants” designed to be surreptitiously installed onto the enemy’s computers. This sort of thing represents the best of the NSA and is exactly what we want it to do. That the United States has these capabilities, as scary as they might be, is cause for gratification.
The second is bulk surveillance, the NSA’s collection of everything it can obtain on every communications channel to which it can get access. This includes things such as the NSA’s bulk collection of call records, location data, e-mail messages and text messages.
This is where the NSA overreaches: collecting data on innocent Americans either incidentally or deliberately, and data on foreign citizens indiscriminately. It doesn’t make us any safer, and it is liable to be abused. Even the director of national intelligence, James Clapper, acknowledged that the collection and storage of data was kept a secret for too long.
The third is the deliberate sabotaging of security. The primary example we have of this is the NSA’s BULLRUN program, which tries to “insert vulnerabilities into commercial encryption systems, IT systems, networks and endpoint communication devices.” This is the worst of the NSA’s excesses, because it destroys our trust in the Internet, weakens the security all of us rely on and makes us more vulnerable to attackers worldwide.
That’s the three: good, bad, very bad. Reorganizing the U.S. intelligence apparatus so it concentrates on our enemies requires breaking up the NSA along those functions.
First, TAO and its targeted surveillance mission should be moved under the control of U.S. Cyber Command, and Cyber Command should be completely separated from the NSA. Actively attacking enemy networks is an offensive military operation, and should be part of an offensive military unit.
Whatever rules of engagement Cyber Command operates under should apply equally to active operations such as sabotaging the Natanz nuclear enrichment facility in Iran and hacking a Belgian telephone company. If we’re going to attack the infrastructure of a foreign nation, let it be a clear military operation.
Second, all surveillance of Americans should be moved to the FBI.
The FBI is charged with counterterrorism in the United States, and it needs to play that role. Any operations focused against U.S. citizens need to be subject to U.S. law, and the FBI is the best place to apply that law. That the NSA can, in the view of many, do an end-run around congressional oversight, legal due process and domestic laws is an affront to our Constitution and a danger to our society. The NSA’s mission should be focused outside the United States — for real, not just for show.
And third, the remainder of the NSA needs to be rebalanced so COMSEC (communications security) has priority over SIGINT (signals intelligence). Instead of working to deliberately weaken security for everyone, the NSA should work to improve security for everyone.
Computer and network security is hard, and we need the NSA’s expertise to secure our social networks, business systems, computers, phones and critical infrastructure. Just recall the recent incidents of hacked accounts — from Target to Kickstarter. What once seemed occasional now seems routine. Any NSA work to secure our networks and infrastructure can be done openly — no secrecy required.
This is a radical solution, but the NSA’s many harms require radical thinking. It’s not far off from what the President’s Review Group on Intelligence and Communications Technologies, charged with evaluating the NSA’s current programs, recommended. Its 24th recommendation was to put the NSA and U.S. Cyber Command under different generals, and the 29th recommendation was to put encryption ahead of exploitation.
I have no illusions that anything like this will happen anytime soon, but it might be the only way to tame the enormous beast that the NSA has become.
This essay previously appeared on CNN.com.
The NSA is putting us at risk:
TAO catalog of hardware and software “implants”:
NSA’s bulk collection of call records:
NSA’s bulk collection of location data:
NSA’s bulk collection of e-mail messages:
NSA’s bulk collection of text messages:
Hacking the Natanz nuclear enrichment facility:
Hacking the Belgian telephone company:
Hacker News thread:
Back when we first started getting reports of the Chinese breaking into U.S. computer networks for espionage purposes, we described it in some very strong language. We called the Chinese actions cyber-attacks. We sometimes even invoked the word cyberwar, and declared that a cyber-attack was an act of war.
When Edward Snowden revealed that the NSA has been doing exactly the same thing as the Chinese to computer networks around the world, we used much more moderate language to describe U.S. actions: words like espionage, or intelligence gathering, or spying. We stressed that it’s a peacetime activity, and that everyone does it.
The reality is somewhere in the middle, and the problem is that our intuitions are based on history.
Electronic espionage is different today than it was in the pre-Internet days of the Cold War. Eavesdropping isn’t passive anymore. It’s not the electronic equivalent of sitting close to someone and overhearing a conversation. It’s not passively monitoring a communications circuit. It’s more likely to involve actively breaking into an adversary’s computer network — be it Chinese, Brazilian, or Belgian — and installing malicious software designed to take over that network.
In other words, it’s hacking. Cyber-espionage is a form of cyber-attack. It’s an offensive action. It violates the sovereignty of another country, and we’re doing it with far too little consideration of its diplomatic and geopolitical costs.
The abbreviation-happy U.S. military has two related terms for what it does in cyberspace. CNE stands for “computer network exploitation.” That’s spying. CNA stands for “computer network attack.” That includes actions designed to destroy or otherwise incapacitate enemy networks. That’s — among other things — sabotage.
CNE and CNA are not solely in the purview of the U.S.; everyone does it. We know that other countries are building their offensive cyberwar capabilities. We have discovered sophisticated surveillance networks from other countries with names like GhostNet, Red October, The Mask. We don’t know who was behind them — these networks are very difficult to trace back to their source — but we suspect China, Russia, and Spain, respectively. We recently learned of a hacking tool called RCS that’s used by 21 governments: Azerbaijan, Colombia, Egypt, Ethiopia, Hungary, Italy, Kazakhstan, Korea, Malaysia, Mexico, Morocco, Nigeria, Oman, Panama, Poland, Saudi Arabia, Sudan, Thailand, Turkey, UAE, and Uzbekistan.
When the Chinese company Huawei tried to sell networking equipment to the U.S., the government considered that equipment a “national security threat,” rightly fearing that those switches were backdoored to allow the Chinese government both to eavesdrop on and to attack U.S. networks. Now we know that the NSA is doing the exact same thing to American-made equipment sold in China, as well as to those very same Huawei switches.
The problem is that, from the point of view of the object of an attack, CNE and CNA look the same, except for the end result. Today’s surveillance systems involve breaking into computers and installing malware, just as cybercriminals do when they want your money. And just like Stuxnet, the U.S./Israeli cyberweapon that disabled the Natanz nuclear facility in Iran in 2010.
This is what Microsoft’s General Counsel Brad Smith meant when he said: “Indeed, government snooping potentially now constitutes an ‘advanced persistent threat,’ alongside sophisticated malware and cyber attacks.”
When the Chinese penetrate U.S. computer networks, which they do with alarming regularity, we don’t really know what they’re doing. Are they modifying our hardware and software to just eavesdrop, or are they leaving “logic bombs” that could be triggered to do real damage at some future time? It can be impossible to tell. As a 2011 EU cybersecurity policy document stated (page 7): “…technically speaking, CNA requires CNE to be effective. In other words, what may be preparations for cyberwarfare can well be cyberespionage initially or simply be disguised as such.”
We can’t tell the intentions of the Chinese, and they can’t tell ours, either.
Much of the current debate in the U.S. is over what the NSA should be allowed to do, and whether limiting the NSA somehow empowers other governments. That’s the wrong debate. We don’t get to choose between a world where the NSA spies and one where the Chinese spy. Our choice is between a world where our information infrastructure is vulnerable to all attackers or secure for all users.
As long as cyber-espionage equals cyber-attack, we would be much safer if we focused the NSA’s efforts on securing the Internet from these attacks. True, we wouldn’t get the same level of access to information flows around the world. But we would be protecting the world’s information flows — including our own — from both eavesdropping and more damaging attacks. We would be protecting our information flows from governments, nonstate actors, and criminals. We would be making the world safer.
Offensive military operations in cyberspace, be they CNE or CNA, should be the purview of the military. In the U.S., that’s U.S. Cyber Command. Such operations should be recognized as offensive military actions, approved at the highest levels of the executive branch, and subject to the same international law standards that govern acts of war in the offline world.
If we’re going to attack another country’s electronic infrastructure, we should treat it like any other attack on a foreign country. It’s no longer just espionage, it’s a cyber-attack.
This essay previously appeared on TheAtlantic.com.
Obama’s speech on NSA surveillance:
RCS hacking tool:
Huawei as national security threat:
Brad Smith quote:
EU policy document:
How we should reorganize the NSA:
Ever since reporters began publishing stories about NSA activities, based on documents provided by Edward Snowden, we’ve been repeatedly assured by government officials that it’s “only metadata.” This might fool the average person, but it shouldn’t fool those of us in the security field. Metadata equals surveillance data, and collecting metadata on people means putting them under surveillance.
An easy thought experiment demonstrates this. Imagine that you hired a private detective to eavesdrop on a subject. That detective would plant a bug in that subject’s home, office, and car. He would eavesdrop on his computer. He would listen in on that subject’s conversations, both face to face and remotely, and you would get a report on what was said in those conversations. (This is what President Obama repeatedly reassures us isn’t happening with our phone calls. But am I the only one who finds it suspicious that he always uses very specific words? “The NSA is not listening in on your phone calls.” This leaves open the possibility that the NSA is recording, transcribing, and analyzing your phone calls — and very occasionally reading them. This is far more likely to be true, and something a pedantically minded president could claim he wasn’t lying about.)
Now imagine that you asked that same private detective to put a subject under constant surveillance. You would get a different report, one that included things like where he went, what he did, who he spoke to — and for how long — who he wrote to, what he read, and what he purchased. This is all metadata, data we know the NSA is collecting. So when the president says that it’s only metadata, what you should really hear is that we’re all under constant and ubiquitous surveillance.
What’s missing from much of the discussion about the NSA’s activities is what they’re doing with all of this surveillance data. The newspapers focus on what’s being collected, not on how it’s being analyzed — with the singular exception of the Washington Post story on cell phone location collection. By their nature, cell phones are tracking devices. For a network to connect calls, it needs to know which cell the phone is located in. In an urban area, this narrows a phone’s location to a few blocks. GPS data, transmitted across the network by far too many apps, locates a phone even more precisely. Collecting this data in bulk, which is what the NSA does, effectively puts everyone under physical surveillance.
This is new. Police could always tail a suspect, but now they can tail everyone — suspect or not. And once they’re able to do that, they can perform analyses that weren’t otherwise possible. The Washington Post reported two examples. One, you can look for pairs of phones that move toward each other, turn off for an hour or so, and then turn themselves back on while moving away from each other. In other words, you can look for secret meetings. Two, you can locate specific phones of interest and then look for other phones that move geographically in synch with those phones. In other words, you can look for someone physically tailing someone else. I’m sure there are dozens of other clever analyses you can perform with a database like this. We need more researchers thinking about the possibilities. I can assure you that the world’s intelligence agencies are conducting this research.
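The two analyses the Post described are straightforward to sketch. The following is a hypothetical illustration only — the phone IDs, cells, and records are made up, and this is not anyone’s actual analytic. Given bulk records of which cell each phone was in at each time slot, it flags pairs of phones that repeatedly appear in the same cell at the same time: the pattern of a physical tail or a traveling companion.

```python
from collections import defaultdict

# Hypothetical bulk-location database: phone -> {time_slot: cell_id}.
# All identifiers and records here are made up for illustration.
records = {
    "phone_A": {0: "c1", 1: "c2", 2: "c3", 3: "c4"},
    "phone_B": {0: "c1", 1: "c2", 2: "c3", 3: "c4"},  # moves in sync with A
    "phone_C": {0: "c9", 1: "c9", 2: "c2", 3: "c7"},
}

def co_travelers(records, min_overlap=3):
    """Flag pairs of phones seen in the same cell in at least
    min_overlap time slots -- the physical-tail pattern."""
    pairs = defaultdict(int)
    phones = sorted(records)
    for i, a in enumerate(phones):
        for b in phones[i + 1:]:
            shared = records[a].keys() & records[b].keys()
            matches = sum(1 for t in shared if records[a][t] == records[b][t])
            if matches >= min_overlap:
                pairs[a, b] = matches
    return dict(pairs)

print(co_travelers(records))  # {('phone_A', 'phone_B'): 4}
```

The secret-meeting variant is the same kind of join with different predicates: look for pairs whose locations converge, go dark for an hour, then diverge.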
How could a secret police use other surveillance databases: everyone’s calling records, everyone’s purchasing habits, everyone’s browsing history, everyone’s Facebook and Twitter history? How could these databases be combined in interesting ways? We need more research on the emergent properties of ubiquitous electronic surveillance.
We can’t protect against what we don’t understand. And whatever you think of the NSA or the other 5-Eyes countries, these techniques aren’t solely theirs. They’re being used by many countries to intimidate and control their populations. In a few years, they’ll be used by corporations for psychological manipulation — persuasion or advertising — and even sooner by cybercriminals for more illicit purposes.
This essay previously appeared in the March/April 2014 issue of IEEE Security and Privacy.
Nice profile of Brian Krebs, cybersecurity journalist:
There’s an interesting project to detect false rumors on the Internet.
I have no idea how well it will work, or even whether it will work, but I like research in this direction. Of the three primary Internet mechanisms for social control, surveillance and censorship have received a lot more attention than propaganda. Anything that can potentially detect propaganda is a good thing.
A new document gives a good overview of how NIST develops cryptographic standards and guidelines. It’s still in draft, and comments are appreciated.
Given that NIST has been tainted by the NSA’s actions to subvert cryptographic standards and protocols, more transparency in this process is welcome. I think NIST is doing a fine job and that it’s not shilling for the NSA, but it needs to do more to convince the world of that.
DDoSing a cell phone network:
The attack involves cloning SIM cards, then making multiple calls from different handsets in different locations with the same SIM card. This confuses the network into thinking that the same phone is in multiple places at once. (Note that this has not been tested in the field, but there seems no reason why it wouldn’t work.) There’s a lot of insecurity in the fact that cell phones and towers largely trust each other. The NSA and FBI use that fact for eavesdropping, and here it’s used for a denial-of-service attack.
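Why conflicting registrations are so disruptive can be shown with a toy model. This is purely illustrative, assuming a drastically simplified registry that stores a single current location per subscriber identity; real GSM signaling, with its HLRs, VLRs, and authentication, is far more involved.

```python
# Toy model of the attack, not real GSM signaling: a simplified
# registry stores one current location per subscriber identity
# (IMSI), and every conflicting registration costs the network a
# location update.
class LocationRegistry:
    def __init__(self):
        self.location = {}  # IMSI -> cell the network thinks it is in
        self.updates = 0    # signaling load: location updates processed

    def register(self, imsi, cell):
        if self.location.get(imsi) != cell:
            self.location[imsi] = cell
            self.updates += 1

def clone_attack(registry, imsi, clone_cells, rounds):
    """Each round, every cloned handset re-registers the same IMSI
    from its own cell, so the stored location never settles."""
    for _ in range(rounds):
        for cell in clone_cells:
            registry.register(imsi, cell)

reg = LocationRegistry()
clone_attack(reg, "IMSI-001", ["cell-1", "cell-2", "cell-3"], rounds=10)
print(reg.updates)  # 30: every registration conflicts with the last one
```

A legitimate phone moving normally generates a handful of updates; the clones force one per registration, and the network can never be sure where to route an incoming call.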
Stun guns release identifying markers when they are fired.
Was the iOS SSL flaw deliberate?
The Voynich Manuscript has been partially decoded. The decoding seems not to be a hoax, and neither does the manuscript itself.
Here’s a new biometric: “Recognizable body odor patterns remain constant enough over time to allow people to be identified with an accuracy rate of 85 percent.” Not yet good enough for most applications, but presumably this kind of thing can only get more accurate.
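Some hypothetical base-rate arithmetic shows why 85 percent isn’t good enough for, say, picking suspects out of a city. Assume, purely for illustration, that the error rate cuts both ways: 85 percent of real targets match, and 15 percent of everyone else matches too.

```python
# Hypothetical base-rate arithmetic, assuming the 85 percent figure
# means 85% of real targets match and 15% of everyone else does too.
population = 1_000_000   # people scanned
real_targets = 100       # people we actually want to find

true_positives = 0.85 * real_targets                  # 85 targets flagged
false_positives = 0.15 * (population - real_targets)  # ~150,000 innocents flagged

precision = true_positives / (true_positives + false_positives)
print(round(precision, 5))  # about 0.00057: almost every match is wrong
```

At those rates a match is nearly meaningless on its own; the accuracy has to improve by orders of magnitude, or be combined with other evidence, before the biometric is useful at scale.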
The Stasi used to preserve individual odor samples for tracking:
Researchers have demonstrated the first airborne Wi-Fi computer virus.
There seems to be an epidemic of computer-generated nonsense academic papers.
Insurance companies are pushing for more cybersecurity.
Cory Doctorow argues that computer security is analogous to public health:
There’s a new (overly breathless) article on the NSA’s QUANTUM program, including a bunch of new source documents.
Of particular note is this page listing a variety of QUANTUM programs. Note that QUANTUMCOOKIE, “which forces users to divulge stored cookies,” is not on this list.
Previous links about QUANTUM:
Also released this week was the STELLARWIND classification guide, in conjunction with a New York Times article on how the FISA court expanded domestic surveillance.
Here’s the previous story about STELLARWIND, from the Washington Post.
See also this NSA document:
Both stories are based on Snowden documents.
Is it just me, or does anyone else wonder why a court with the word “foreign” in its name would rule on domestic intelligence collection?
These four slides, released this week, describe one process the NSA has for eavesdropping on VPN and VoIP traffic. There’s a lot of information on these slides, though it’s a veritable sea of code names. No details as to how the NSA decrypts those ESP — “Encapsulating Security Payload” — packets, although there are some clues in the form of code names in the slides.
Nicholas Weaver has an excellent essay explaining how QUANTUM works:
Increasingly, we are watched not by people but by algorithms. Amazon and Netflix track the books we buy and the movies we stream, and suggest other books and movies based on our habits. Google and Facebook watch what we do and what we say, and show us advertisements based on our behavior. Google even modifies our web search results based on our previous behavior. Smartphone navigation apps watch us as we drive, and update suggested route information based on traffic congestion. And the National Security Agency, of course, monitors our phone calls, emails and locations, then uses that information to try to identify terrorists.
Documents provided by Edward Snowden and revealed by the Guardian today show that the UK spy agency GCHQ, with help from the NSA, has been collecting millions of webcam images from innocent Yahoo users. And that speaks to a key distinction in the age of algorithmic surveillance: is it really okay for a computer to monitor you online, and for that data collection and analysis only to count as a potential privacy invasion when a person sees it? I say it’s not, and the latest Snowden leaks only make clearer how important this distinction is.
The robots-vs-spies divide is especially important as we decide what to do about NSA and GCHQ surveillance. The spy community and the Justice Department have reported back early on President Obama’s request for changing how the NSA “collects” your data, but the potential reforms — FBI monitoring, holding on to your phone records and more — still largely depend on what the meaning of “collects” is.
Indeed, ever since Snowden provided reporters with a trove of top secret documents, we’ve been subjected to all sorts of NSA word games. And the word “collect” has a very special definition, according to the Department of Defense (DoD). A 1982 procedures manual (page 15) says: “information shall be considered as ‘collected’ only when it has been received for use by an employee of a DoD intelligence component in the course of his official duties.” And “data acquired by electronic means is ‘collected’ only when it has been processed into intelligible form.”
Director of National Intelligence James Clapper likened the NSA’s accumulation of data to a library. All those books are stored on the shelves, but very few are actually read. “So the task for us in the interest of preserving security and preserving civil liberties and privacy,” says Clapper, “is to be as precise as we possibly can be when we go in that library and look for the books that we need to open up and actually read.” Only when an individual book is read does it count as “collection,” in government parlance.
So, think of that friend of yours who has thousands of books in his house. According to the NSA, he’s not actually “collecting” books. He’s doing something else with them, and the only books he can claim to have “collected” are the ones he’s actually read.
This is why Clapper claims — to this day — that he didn’t lie in a Senate hearing when he replied “no” to this question: “Does the NSA collect any type of data at all on millions or hundreds of millions of Americans?”
If the NSA collects — I’m using the everyday definition of the word here — all of the contents of everyone’s e-mail, it doesn’t count it as being collected in NSA terms until someone reads it. And if it collects — I’m sorry, but that’s really the correct word — everyone’s phone records or location information and stores it in an enormous database, that doesn’t count as being collected — NSA definition — until someone looks at it. If the agency uses computers to search those emails for keywords, or correlates that location information for relationships between people, it doesn’t count as collection, either. Only when those computers spit out a particular person has the data — in NSA terms — actually been collected.
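Under that definition, a pipeline like the following hypothetical sketch involves no “collection” at all until the final loop: the computer stores and scans everything, and only the messages it flags are ever “received for use” by a human analyst.

```python
# Hypothetical sketch of the word game. Storing and machine-scanning
# everything is, in NSA terms, not "collection"; only the last loop,
# where a person reads a flagged message, would count.
stored_messages = [  # acquired and retained, but not yet "collected"
    "lunch at noon?",
    "the shipment arrives tuesday",
    "happy birthday!",
]
keywords = {"shipment", "arrives"}

def machine_scan(messages, keywords):
    """Computers search every message -- still not 'collection'."""
    return [m for m in messages if keywords & set(m.split())]

flagged = machine_scan(stored_messages, keywords)
for message in flagged:
    print("analyst reads:", message)  # only now is it "collected"
```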
If the modern spy dictionary has you confused, maybe dogs can help us understand why this legal workaround, by big tech companies and the government alike, is still a serious invasion of privacy.
Back when Gmail was introduced, this was Google’s defense, too, about its context-sensitive advertising. Google’s computers examine each individual email and insert an advertisement nearby, related to the contents of your email. But no person at Google reads any Gmail messages; only a computer does. In the words of one Google executive: “Worrying about a computer reading your email is like worrying about your dog seeing you naked.”
But now that we have an example of a spy agency seeing people naked — there are a surprising number of sexually explicit images in the newly revealed Yahoo image collection — we can more viscerally understand the difference.
To wit: when you’re watched by a dog, you know that what you’re doing will go no further than the dog. The dog can’t remember the details of what you’ve done. The dog can’t tell anyone else. When you’re watched by a computer, that’s not true. You might be told that the computer isn’t saving a copy of the video, but you have no assurance that that’s true. You might be told that the computer won’t alert a person if it perceives something of interest, but you can’t know if that’s true. You do know that the computer is making decisions based on what it receives, and you have no way of confirming that no human being will access that decision.
When a computer stores your data, there’s always a risk of exposure. There’s the risk of accidental exposure, when some hacker or criminal breaks in and steals the data. There’s the risk of purposeful exposure, when the organization that has your data uses it in some manner. And there’s the risk that another organization will demand access to the data. The FBI can serve a National Security Letter on Google, demanding details on your email and browsing habits. There isn’t a court order in the world that can get that information out of your dog.
Of course, any time we’re judged by algorithms, there’s the potential for false positives. You are already familiar with this; just think of all the irrelevant advertisements you’ve been shown on the Internet, based on some algorithm misinterpreting your interests. In advertising, that’s okay. It’s annoying, but there’s little actual harm, and you were busy reading your email anyway, right? But that harm increases as the accompanying judgments become more important: our credit ratings depend on algorithms; how we’re treated at airport security does, too. And most alarming of all, drone targeting is partly based on algorithmic surveillance.
The primary difference between a computer and a dog is that the computer interacts with other people in the real world, and the dog does not. If someone could isolate the computer in the same way a dog is isolated, we wouldn’t have any reason to worry about algorithms crawling around in our data. But we can’t. Computer algorithms are intimately tied to people. And when we think of computer algorithms surveilling us or analyzing our personal data, we need to think about the people behind those algorithms. Whether or not anyone actually looks at our data, the very fact that they even could is what makes it surveillance.
This is why Yahoo called GCHQ’s webcam-image collection “a whole new level of violation of our users’ privacy.” This is why we’re not mollified by attempts from the UK equivalent of the NSA to apply facial recognition algorithms to the data, or to limit how many people viewed the sexually explicit images. This is why Google’s eavesdropping is different than a dog’s eavesdropping, and why the NSA’s definition of “collect” makes no sense whatsoever.
This essay previously appeared on theguardian.com:
GCHQ collecting webcam images: http://www.theguardian.com/world/2014/feb/27/…
NSA word games:
A 1982 procedures manual:
Ads are annoying:
Targeting drones by algorithm:
One of the top-secret NSA documents published by Der Spiegel was a 50-page catalog of “implants” from the NSA’s Tailored Access Operations (TAO) group. Because the individual implants are so varied and we saw so many at once, most of them were never discussed in the security community. (Also, the pages were images, which makes them harder to index and search.) To rectify this, I’ve been publishing an exploit a day on my blog.
PICASSO is a modified GSM (target) handset that collects user data, location information and room audio. Command and data exfil is done from a laptop and regular phone via SMS (Short Messaging Service), without alerting the target.
TOTECHASER is a Windows CE implant targeting the Thuraya 2520 handset. The Thuraya is a dual mode phone that can operate either in SAT or GSM modes. The phone also supports a GPRS data connection for Web browsing, e-mail, and MMS messages. The initial software implant capabilities include providing GPS and GSM geo-location information. Call log, contact list, and other user information can also be retrieved from the phone.
TOTEGHOSTLY 2.0 is a software implant for the Windows Mobile operating system that utilizes modular mission applications to provide specific SIGINT functionality. This functionality includes the ability to remotely push/pull files from the device, SMS retrieval, contact list retrieval, voicemail, geolocation, hot mic, camera capture, cell tower location, etc.
CANDYGRAM mimics a GSM cell tower of a target network. Capable of operations at 900, 1800, or 1900 MHz. Whenever a target handset enters the CANDYGRAM base station’s area of influence, the system sends out an SMS through the external network to registered watch phones.
CROSSBEAM is a reusable CHIMNEYPOOL-compliant GSM communications module capable of collecting and compressing voice data. CROSSBEAM can receive GSM voice, record voice data, and transmit the received information via connected modules or 4 different GSM data modes (GPRS, Circuit Switched Data, Data Over Voice, and DTMF) back to a secure facility.
CYCLONE Hx9 is an EGSM (900 MHz) macro-class Network-In-a-Box (NIB) system.
EBSR is a multi-purpose, Pico class, tri-band active GSM base station with internal 802.11/GPS/handset capability.
ENTOURAGE is a direction-finding application operating on the HOLLOWPOINT platform. The system is capable of providing line of bearing for GSM/UMTS/CDMA2000/FRS signals.
GENESIS is a commercial GSM handset that has been modified to include a Software Defined Radio (SDR) and additional system memory. The internal SDR allows a witting user to covertly perform network surveys, record RF spectrum, or perform handset location in hostile environments.
NEBULA is a multi-protocol macro-class Network-In-a-Box (NIB) system. Leverages the existing TYPHON GUI and supports GSM, UMTS, and CDMA2000 applications. LTE capability currently under development.
TYPHON HX is a Base Station Router — Network-In-a-Box (NIB) supporting GSM bands 850/900/1800/1900 and associated full GSM signaling and call control.
WATERWITCH is a hand held finishing tool used for geolocating targeted handsets in the field.
COTTONMOUTH-I (CM-1) is a Universal Serial Bus (USB) hardware implant which will provide a wireless bridge into a target network as well as the ability to load exploit software onto target PCs. CM-I will provide air-gap bridging, software persistence capability, “in-field” re-programmability, and covert communications with a host software implant over the USB. The RF link will enable command and data infiltration and exfiltration. CM-I will also communicate with Data Network Technologies (DNT) software (STRAITBIZARRE) through a covert channel implemented on the USB, using this communication channel to pass commands and data between hardware and software implants.
COTTONMOUTH-II (CM-II) is a Universal Serial Bus (USB) hardware Host Tap, which will provide a covert link over the USB link into a target network. CM-II is intended to operate with a long haul relay subsystem, which is co-located within the target equipment.
COTTONMOUTH-III (CM-III) is a Universal Serial Bus (USB) hardware implant, which will provide a wireless bridge into a target network as well as the ability to load exploit software onto target PCs.
FIREWALK is a bidirectional network implant, capable of passively collecting Gigabit Ethernet network traffic, and actively injecting Ethernet packets onto the same target network.
RAGEMASTER is an RF retro-reflector that provides an enhanced radar cross-section for VAGRANT collection. It’s concealed in a standard computer video graphics array (VGA) cable between the video card and the video monitor. It’s typically installed in the ferrite on the video cable.
And that’s it.
When I decided to post an exploit a day from the TAO implant catalog, my goal was to highlight the myriad capabilities of the NSA’s Tailored Access Operations group (basically, its black-bag teams). The catalog was published by Der Spiegel along with a pair of articles on the NSA’s CNE (Computer Network Exploitation) operations, and it was just too much to digest all at once. While the various nations’ counterespionage groups certainly pored over the details, they largely washed over those of us in the academic and commercial communities. By republishing a single exploit a day, I hoped we would all read and digest each individual TAO capability.
It’s important that we know the details of these attack tools. Not because we want to evade the NSA — although some of us do — but because the NSA doesn’t have a monopoly on either technology or cleverness. The NSA might have a larger budget than every other intelligence agency in the world combined, but these tools are the sorts of things that any well-funded nation-state adversary would use. And as technology advances, they are the sorts of tools we’re going to see cybercriminals use. So think of this less as what the NSA does, and more as a head start on what everyone will be using.
Which means we need to figure out how to defend against them.
The NSA has put a lot of effort into designing software implants that evade antivirus and other detection tools, transmit data when they know they can’t be detected, and survive reinstallation of the operating system. It has software implants designed to jump air gaps without being detected. It has an impressive array of hardware implants, also designed to evade detection. And it spends a lot of effort on hacking routers and switches. These sorts of observations should become a road map for anti-malware companies.
Anyone else have observations or comments, now that we’ve seen the entire catalog? Post them in the comments section of the associated blog post.
The TAO catalog isn’t current; it’s from 2008. So the NSA has had six years to improve all of the tools in this catalog, and to add a bunch more. Figuring out how to extrapolate to current capabilities is also important.
TAO implant catalog:
Tailored Access Operations group:
Blog post, for comments:
One of the recommendations by the president’s Review Group on Intelligence and Communications Technologies on reforming the National Security Agency — No. 5, if you’re counting — is that the government should not collect and store telephone metadata. Instead, a private company — either the phone companies themselves or some other third party — should store the metadata and provide it to the government only upon a court order.
This isn’t a new idea. Over the past decade, several countries have enacted mandatory data retention laws, in which companies are required to save Internet or telephony data about customers for a specified period of time, in case the government needs it for an investigation. But does it make sense? In December, Harvard Law professor Jack Goldsmith asked: “I understand the Report’s concerns about the storage of bulk meta-data by the government. But I do not understand the Report’s implicit assumption that the storage of bulk meta-data by private entities is an improvement from the perspective of privacy, or data security, or potential abuse.”
It’s a good question, and in the almost two months since the report was released, it hasn’t received enough attention. I think the proposal makes things worse in several respects.
First, the NSA is going to do a better job at database security than corporations are. I say this not because the NSA has any magic computer security powers, but because it has more experience at it and is better funded. (And, yes, that’s true even though Edward Snowden was able to copy so many of its documents.) The difference is of degree, not of kind. Both options leave the data vulnerable to insider attacks — more so in the case of a third-party data repository because there will be more insiders. And although neither will be perfect, I would trust the NSA to protect my data *against unauthorized access* more than I would trust a private corporation to do the same.
Second, there’s the greater risk of authorized access. This is the risk that the Review Group is most concerned about. The thought is that if the data were in private hands, and the only legal way to get at the data was a court order, then it would be less likely for the NSA to exceed its authority by making bulk queries on the data or accessing more of it than it is allowed to. I don’t believe that this is true. Any system that has the data outside of the NSA’s control is going to include provisions for emergency access, because … well, because the word *terrorism* will scare any lawmaker enough to give the NSA that capability. Already the NSA goes through whatever legal processes it and the secret FISA court have agreed to. Adding another party into this process doesn’t slow things down, provide more oversight, or in any way make it better. I don’t trust a corporate employee not to turn data over for NSA analysis any more than I trust an NSA employee.
On the corporate side, the corresponding risk is that the data will be used for all sorts of things that wouldn’t be possible otherwise. If corporations are forced by governments to hold on to customer data, they’re going to start thinking things like: “We’re already storing this personal data on all of our customers for the government. Why don’t we mine it for interesting tidbits, use it for marketing purposes, sell it to data brokers, and on and on and on?” At least the NSA isn’t going to use our personal data for large-scale individual psychological manipulation designed to separate us from as much money as possible — which is the business model of companies like Google and Facebook.
The final claimed benefit — and this one is from the president’s Review Group — is that putting the data in private hands will make us all feel better. They write: “Knowing that the government has ready access to one’s phone call records can seriously chill ‘associational and expressive freedoms,’ and knowing that the government is one flick of a switch away from such information can profoundly ‘alter the relationship between citizen and government in a way that is inimical to society.'” Those quotes within the quote are from Justice Sonia Sotomayor’s opinion in the U.S. v. Jones GPS monitoring case.
The Review Group believes that moving the data to some other organization, either the companies that generate it in the first place or some third-party data repository, fixes that problem. But is that something we really want fixed? The fact that a government has us all under constant and ubiquitous surveillance *should* be chilling. It *should* limit freedom of expression. It is inimical to society, and to the extent we hide what we’re doing from the people or do things that only pretend to fix the problem, we do ourselves a disservice.
Where does this leave us? If the corporations are storing the data already — for some business purpose — then the answer is easy: Only they should store it. If the corporations are not already storing the data, then — on balance — it’s safer for the NSA to store the data. And in many cases, the right answer is for no one to store the data. It should be deleted because keeping it makes us all less secure.
This question is much bigger than the NSA. There are going to be data — medical data, movement data, transactional data — that are both valuable to us all in aggregate and private to us individually. And in every one of those instances, we’re going to be faced with the same question: How do we extract that societal value, while at the same time protecting its personal nature? This is one of the key challenges of the Information Age, and figuring out where to store the data is a major part of that challenge. There certainly isn’t going to be one solution for all instances of this problem, but learning how to weigh the costs and benefits of different solutions will be a key component to harnessing the power of big data without suffering the societal harms.
This essay originally appeared on Slate.com, with a very misleading title.
Review Group recommendations:
U.S. v. Jones:
Commentary from Lawfare blog:
I signed an open letter from US researchers in cryptography and information security on NSA surveillance. It has received a lot of media coverage.
I am speaking at “Digital Threats and Solutions” in Copenhagen on March 20:
I am speaking at SOURCE Boston on Apr 9:
I am speaking at Yale in New Haven, CT, twice on April 10:
I am speaking at the University of Minnesota in Minneapolis on April 14:
My talk on the NSA from the RSA Conference:
This version, from MIT a few weeks earlier, is better:
I was interviewed by Joe Menn at TrustyCon about the NSA:
Various text, audio, and video interviews from the RSA Conference:
An interview about the iOS flaw:
An interview about security and power:
An interview on incident response:
Last week at the RSA Conference, we announced that we’ve integrated Co3 Systems’ incident-response coordination software with the HP ArcSight SIEM system, and that CSC is basing its incident-response service on Co3 Systems.
Providing random numbers on computers can be very difficult. Back in 2003, Niels Ferguson and I designed Fortuna as a secure PRNG. Particularly important is how it collects entropy from various processes on the computer and mixes them all together.
While Fortuna is widely used, there hadn’t been any real analysis of the system. This has now changed. A new paper by Yevgeniy Dodis, Adi Shamir, Noah Stephens-Davidowitz, and Daniel Wichs provides some theoretical modeling of entropy collection and PRNGs. They analyze Fortuna, find it good but not optimal, and then provide their own optimal system.
Excellent, and long-needed, research.
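For readers who haven’t seen Fortuna’s accumulator, its core idea — distribute entropy events round-robin across 32 pools, and draw on pool i only on every 2^i-th reseed, so that a pool the attacker can’t predict eventually recovers security — can be sketched in a few lines of Python. This is a toy illustration only, not the full design (no output generator, no reseed-timing rules), and certainly not production code:

```python
import hashlib

NUM_POOLS = 32  # Fortuna uses 32 entropy pools

class FortunaSketch:
    """Toy sketch of Fortuna-style entropy accumulation (illustration only)."""

    def __init__(self):
        self.pools = [hashlib.sha256() for _ in range(NUM_POOLS)]
        self.key = b"\x00" * 32   # current generator key
        self.reseed_count = 0
        self.next_pool = 0

    def add_event(self, source_id, data):
        # Events from all sources are appended round-robin across the pools,
        # tagged with a source byte and a length byte.
        self.pools[self.next_pool].update(
            bytes([source_id & 0xFF, len(data) & 0xFF]) + data)
        self.next_pool = (self.next_pool + 1) % NUM_POOLS

    def reseed(self):
        self.reseed_count += 1
        material = self.key
        for i in range(NUM_POOLS):
            # Pool i contributes only when 2^i divides the reseed counter,
            # so higher pools accumulate entropy for longer between uses.
            if self.reseed_count % (2 ** i) == 0:
                material += self.pools[i].digest()
                self.pools[i] = hashlib.sha256()  # empty the pool after use
        self.key = hashlib.sha256(material).digest()
```

Pool 0 is used on every reseed, pool 1 on every second reseed, and so on; that staggering is the design decision the new paper analyzes.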
Remote Control System (RCS) is a piece of spyware sold exclusively to governments by a Milan-based company called Hacking Team. Recently, Citizen Lab found this spyware being used by the Ethiopian government against journalists, including American journalists.
More recently, Citizen Lab mapped the software and who’s using it:
Hacking Team advertises that their RCS spyware is “untraceable” to a specific government operator. However, we claim to identify a number of current or former government users of the spyware by pinpointing endpoints, and studying instances of RCS that we have observed. We suspect that agencies of these twenty-one governments are current or former users of RCS: Azerbaijan, Colombia, Egypt, Ethiopia, Hungary, Italy, Kazakhstan, Korea, Malaysia, Mexico, Morocco, Nigeria, Oman, Panama, Poland, Saudi Arabia, Sudan, Thailand, Turkey, UAE, and Uzbekistan.
Both articles on the Citizen Lab website are worth reading; the details are fascinating. And more are coming.
Finally, congratulations to Citizen Lab for receiving a 2014 MacArthur Award for Creative and Effective Institutions, along with the $1M prize. This organization is one of the good guys, and I’m happy to see it get money to continue its work.
As insecure as passwords generally are, they’re not going away anytime soon. Every year you have more and more passwords to deal with, and every year they get easier and easier to break. You need a strategy.
The best way to explain how to choose a good password is to explain how they’re broken. The general attack model is what’s known as an offline password-guessing attack. In this scenario, the attacker gets a file of encrypted passwords from somewhere people want to authenticate to. His goal is to turn that encrypted file into unencrypted passwords he can use to authenticate himself. He does this by guessing passwords, and then seeing if they’re correct. He can try guesses as fast as his computer will process them — and he can parallelize the attack — and gets immediate confirmation if he guesses correctly. Yes, there are ways to foil this attack, and that’s why we can still have four-digit PINs on ATM cards, but it’s the correct model for breaking passwords.
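The attack model is easy to demonstrate. The sketch below assumes the attacker has obtained a leaked table of unsalted SHA-256 password hashes; the usernames, passwords, and guess list are all made up for illustration, and a real site should be using salted, deliberately slow password hashes instead:

```python
import hashlib

# Made-up leaked table: username -> unsalted SHA-256 of the password.
leaked = {
    "alice": hashlib.sha256(b"letmein1").hexdigest(),
    "bob":   hashlib.sha256(b"temp123").hexdigest(),
}

guesses = ["123456", "password", "letmein1", "temp123", "qwerty"]

def crack(leaked_hashes, candidate_passwords):
    """Offline guessing: hash each candidate and compare against every
    stored hash. Nothing rate-limits the attacker; the only constraint
    is how fast the machine can compute hashes."""
    recovered = {}
    for guess in candidate_passwords:
        h = hashlib.sha256(guess.encode()).hexdigest()
        for user, stored in leaked_hashes.items():
            if h == stored:
                recovered[user] = guess
    return recovered

print(crack(leaked, guesses))  # recovers both toy passwords instantly
```

A real system would at least salt and stretch its hashes (bcrypt, scrypt, PBKDF2), which slows this loop by orders of magnitude — but that’s a defense the site chooses, not something a weak password can rely on.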
There are commercial programs that do password cracking, sold primarily to police departments. There are also hacker tools that do the same thing. And they’re *really* good.
The effectiveness of password cracking depends on two largely independent things: power and efficiency.
Power is simply computing power. As computers have become faster, they’re able to test more passwords per second; one program advertises eight million per second. These crackers might run for days, on many machines simultaneously. For a high-profile police case, they might run for months.
Efficiency is the ability to guess passwords cleverly. It doesn’t make sense to run through every eight-letter combination from “aaaaaaaa” to “zzzzzzzz” in order. That’s about 209 billion (26^8) possible passwords, most of them very unlikely. Password crackers try the most common passwords first.
A typical password consists of a root plus an appendage. The root isn’t necessarily a dictionary word, but it’s usually something pronounceable. An appendage is either a suffix (90% of the time) or a prefix (10% of the time). One cracking program I saw started with a dictionary of about 1,000 common passwords, things like “letmein,” “temp,” “123456,” and so on. Then it tested them each with about 100 common suffix appendages: “1,” “4u,” “69,” “abc,” “!,” and so on. It recovered about a quarter of all passwords with just these 100,000 combinations.
Crackers use different dictionaries: English words, names, foreign words, phonetic patterns and so on for roots; two digits, dates, single symbols and so on for appendages. They run the dictionaries with various capitalizations and common substitutions: “$” for “s”, “@” for “a,” “1” for “l” and so on. This guessing strategy quickly breaks about two-thirds of all passwords.
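That mangling strategy is simple to mechanize. Here’s a toy Python generator in the same spirit; the word lists and substitution table are tiny illustrative stand-ins for the far larger dictionaries real crackers ship with:

```python
from itertools import product

# Tiny illustrative dictionaries (real crackers use millions of entries).
roots = ["letmein", "password", "temp"]
suffixes = ["", "1", "123", "!", "69"]
subs = {"a": "@", "s": "$", "l": "1", "o": "0"}  # common substitutions

def mangle(word):
    """Yield the word as-is, capitalized, and with leet substitutions."""
    yield word
    yield word.capitalize()
    yield "".join(subs.get(c, c) for c in word)

def candidates():
    # Every (root, suffix) pair in each mangled form, cheapest guesses first.
    for root, suffix in product(roots, suffixes):
        for form in mangle(root):
            yield form + suffix

guesses = list(candidates())
# 3 roots x 5 suffixes x 3 forms = 45 candidate strings (some duplicates).
```

The multiplication is the point: 1,000 roots, 100 suffixes, and a handful of mangling rules yield a few hundred thousand guesses — a trivial workload for a cracker testing millions per second, yet enough to recover a large fraction of real passwords.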
Modern password crackers combine different words from their dictionaries. From ArsTechnica:
What was remarkable about all three cracking sessions were the types of plains that got revealed. They included passcodes such as “k1araj0hns0n,” “Sh1a-labe0uf,” “Apr!l221973,” “Qbesancon321,” “DG091101%,” “@Yourmom69,” “ilovetofunot,” “windermere2313,” “tmdmmj17,” and “BandGeek2014.” Also included in the list: “all of the lights” (yes, spaces are allowed on many sites), “i hate hackers,” “allineedislove,” “ilovemySister31,” “iloveyousomuch,” “Philippians4:13,” “Philippians4:6-7,” and “qeadzcwrsfxv1331.” “gonefishing1125” was another password Steube saw appear on his computer screen. Seconds after it was cracked, he noted, “You won’t ever find it using brute force.”
This is why the oft-cited XKCD scheme for generating passwords — string together individual words like “correcthorsebatterystaple” — is no longer good advice. The password crackers are on to this trick.
The attacker will feed any personal information he has access to about the password creator into the password crackers. A good password cracker will test names and addresses from the address book, meaningful dates, and any other personal information it has. Postal codes are common appendages. If it can, the guesser will index the target hard drive and create a dictionary that includes every printable string, including deleted files. If you ever saved an e-mail with your password, or kept it in an obscure file somewhere, or if your program ever stored it in memory, this process will grab it. And it will speed the process of recovering your password.
Last year, Ars Technica gave three experts a 16,000-entry encrypted password file, and asked them to break as many as possible. The winner got 90% of them, the loser 62% — in a few hours. It’s the same sort of thing we saw in 2012, 2007, and earlier. If there’s any news, it’s that this kind of thing is getting easier faster than people think.
Pretty much anything that can be remembered can be cracked.
There’s still one scheme that works. Back in 2008, I described the “Schneier scheme”:
So if you want your password to be hard to guess, you should choose something that this process will miss. My advice is to take a sentence and turn it into a password. Something like “This little piggy went to market” might become “tlpWENT2m”. That nine-character password won’t be in anyone’s dictionary. Of course, don’t use this one, because I’ve written about it. Choose your own sentence — something personal.
Here are some examples:
WIw7,mstmsritt… = When I was seven, my sister threw my stuffed rabbit in the toilet.
Wow…doestcst = Wow, does that couch smell terrible.
Ltime@go-inag~faaa! = Long time ago in a galaxy not far away at all.
uTVM,TPw55:utvm,tpwstillsecure = Until this very moment, these passwords were still secure.
You get the idea. Combine a personally memorable sentence with some personally memorable tricks to modify that sentence into a password to create a lengthy password. Of course, the site has to accept all of those non-alpha-numeric characters and an arbitrarily long password. Otherwise, it’s much harder.
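To see where the security comes from, it helps to separate the scheme’s mechanical core from the personal part. The naive function below just takes the first character of each word; note that for the first example above it produces “WIwsmstmsritt”, not the published “WIw7,mstmsritt…”. The difference — substituting “7” for “seven” and keeping the comma — is exactly the personal, unpredictable modification that a mechanical rule can’t supply and a cracker can’t enumerate:

```python
def first_letters(sentence):
    """Naive starting point: the first character of each word.
    This alone is NOT the scheme; the personal tweaks (digits for
    words, kept punctuation, odd capitalization) are what matter."""
    return "".join(word[0] for word in sentence.split())

first_letters("When I was seven, my sister threw my stuffed rabbit in the toilet")
# -> "WIwsmstmsritt"
```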
Even better is to use random unmemorable alphanumeric passwords (with symbols, if the site will allow them), and a password manager like Password Safe to create and store them. Password Safe includes a random password generation function. Tell it how many characters you want — twelve is my default — and it’ll give you passwords like y.)v_|.7)7Bl, B3h4_[%}kgv), and QG6,FN4nFAm_. The program supports cut and paste, so you’re not actually typing those characters very much. I’m recommending Password Safe for Windows because I wrote the first version, know the person currently in charge of the code, and trust its security. There are ports of Password Safe to other OSs, but I had nothing to do with those. There are also other password managers out there, if you want to shop around.
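What a generator like that does under the hood is simple. Here’s a sketch in Python — not Password Safe’s actual code, just the general idea — using the operating system’s CSPRNG via the standard library’s “secrets” module rather than the predictable “random” module:

```python
import secrets
import string

# Letters, digits, and symbols: 94 printable ASCII characters in all.
ALPHABET = string.ascii_letters + string.digits + string.punctuation

def random_password(length=12, alphabet=ALPHABET):
    """Choose each character independently and uniformly with a CSPRNG.
    Never use the 'random' module for this; it is predictable."""
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(random_password())  # e.g. y.)v_|.7)7Bl -- different every run
```

At twelve characters over a 94-symbol alphabet, that’s 94^12 — roughly 2^78 — equally likely passwords, far beyond the reach of any dictionary or brute-force attack described above.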
There’s more to passwords than simply choosing a good one:
1. Never reuse a password you care about. Even if you choose a secure password, the site it’s for could leak it because of its own incompetence. You don’t want someone who gets your password for one application or site to be able to use it for another.
2. Don’t bother updating your password regularly. Sites that require 90-day — or whatever — password changes do more harm than good. Unless you think your password might be compromised, don’t change it.
3. Beware the “secret question.” You don’t want a backup system for when you forget your password to be easier to break than your password. Really, it’s smart to use a password manager. Or to write your passwords down on a piece of paper and secure that piece of paper.
4. One more piece of advice: if a site offers two-factor authentication, seriously consider using it. It’s almost certainly a security improvement.
This essay previously appeared on BoingBoing.
ArsTechnica article on password cracking:
An empirical study on phrase-based passwords from 2000.
Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise. You can subscribe, unsubscribe, or change your address on the Web at <http://www.schneier.com/crypto-gram.html>. Back issues are also available at that URL.
Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.
CRYPTO-GRAM is written by Bruce Schneier. Bruce Schneier is an internationally renowned security technologist, called a “security guru” by The Economist. He is the author of 12 books — including “Liars and Outliers: Enabling the Trust Society Needs to Survive” — as well as hundreds of articles, essays, and academic papers. His influential newsletter “Crypto-Gram” and his blog “Schneier on Security” are read by over 250,000 people. He has testified before Congress, is a frequent guest on television and radio, has served on several government committees, and is regularly quoted in the press. Schneier is a fellow at the Berkman Center for Internet and Society at Harvard Law School, a program fellow at the New America Foundation’s Open Technology Institute, a board member of the Electronic Frontier Foundation, an Advisory Board Member of the Electronic Privacy Information Center, and the Chief Technology Officer at Co3 Systems, Inc. See <http://www.schneier.com>.
Crypto-Gram is a personal newsletter. Opinions expressed are not necessarily those of Co3 Systems, Inc.
Copyright (c) 2014 by Bruce Schneier.