Blog: March 2014 Archives

The Continuing Public/Private Surveillance Partnership

If you’ve been reading the news recently, you might think that corporate America is doing its best to thwart NSA surveillance.

Google just announced that it is encrypting Gmail when you access it from your computer or phone, and between data centers. Last week, Mark Zuckerberg personally called President Obama to complain about the NSA using Facebook as a means to hack computers, and Facebook’s Chief Security Officer explained to reporters that the attack technique has not worked since last summer. Yahoo, Google, Microsoft, and others are now regularly publishing “transparency reports,” listing approximately how many government data requests the companies have received and complied with.

On the government side, last week the NSA’s General Counsel Rajesh De seemed to have thrown those companies under a bus by stating that—despite their denials—they knew all about the NSA’s collection of data under both the PRISM program and some unnamed “upstream” collections on the communications links.

Yes, it may seem like the public/private surveillance partnership has frayed—but, unfortunately, it is alive and well. The main interests of the massive Internet companies and the government agencies still largely align: to keep us all under constant surveillance. When they bicker, it’s mostly role-playing designed to keep us blasé about what’s really going on.

The U.S. intelligence community is still playing word games with us. The NSA collects our data based on four different legal authorities: the Foreign Intelligence Surveillance Act (FISA) of 1978, Executive Order 12333 of 1981 (amended in 2004 and 2008), Section 215 of the Patriot Act of 2001, and Section 702 of the FISA Amendments Act (FAA) of 2008. Be careful when someone from the intelligence community uses the caveat “not under this program” or “not under this authority”; almost certainly it means that whatever it is they’re denying is done under some other program or authority. So when De said that companies knew about NSA collection under Section 702, it doesn’t mean they knew about the other collection programs.

The big Internet companies know of PRISM—although not under that code name—because that’s how the program works; the NSA serves them with FISA orders. Those same companies did not know about any of the other surveillance against their users conducted under the far more permissive EO 12333. Google and Yahoo did not know about MUSCULAR, the NSA’s secret program to eavesdrop on their trunk connections between data centers. Facebook did not know about QUANTUMHAND, the NSA’s secret program to attack Facebook users. And none of the target companies knew that the NSA was harvesting their users’ address books and buddy lists.

These companies are certainly pissed that the publicity surrounding the NSA’s actions is undermining their users’ trust in their services, and they’re losing money because of it. Cisco, IBM, cloud service providers, and others have announced that they’re losing billions, mostly in foreign sales.

These companies are doing their best to convince users that their data is secure. But they’re relying on their users not understanding what real security looks like. IBM’s letter to its clients last week is an excellent example. The letter lists five “simple facts” that it hopes will mollify its customers, but the items are so qualified with caveats that they do the exact opposite to anyone who understands the full extent of NSA surveillance. And IBM’s spending $1.2B on data centers outside the U.S. will only reassure customers who don’t realize that National Security Letters require a company to turn over data, regardless of where in the world it is stored.

Google’s recent actions, and the similar actions of many other Internet companies, will definitely improve users’ security against surreptitious government collection programs—both the NSA’s and other governments’—but the accompanying assurances deliberately ignore the massive security vulnerability built into these services by design. Google, and by extension the U.S. government, still has access to your communications on Google’s servers.

Google could change that. It could encrypt your e-mail so that only you could decrypt and read it. It could provide secure voice and video so that no one outside the conversation could eavesdrop.

It doesn’t. And neither does Microsoft, Facebook, Yahoo, Apple, or any of the others.
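Architecturally, what those companies could offer is simple: the client encrypts before anything leaves the device, and the provider stores only ciphertext it cannot read. A toy Python sketch of that split (it uses a throwaway SHA-256 stream construction purely for illustration; a real system would use a vetted AEAD cipher and real key management):

```python
import hashlib, os

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Throwaway stream construction (SHA-256 in counter mode), purely to
    # illustrate the architecture. A real system would use a vetted AEAD.
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def client_encrypt(key, plaintext):
    nonce = os.urandom(16)
    ct = bytes(a ^ b for a, b in zip(plaintext, keystream(key, nonce, len(plaintext))))
    return nonce, ct

def client_decrypt(key, nonce, ct):
    return bytes(a ^ b for a, b in zip(ct, keystream(key, nonce, len(ct))))

# The provider stores only (nonce, ciphertext). Without the user's key,
# the server -- and anyone who compels the server -- holds just noise.
user_key = os.urandom(32)
nonce, stored = client_encrypt(user_key, b"meet me at noon")
assert client_decrypt(user_key, nonce, stored) == b"meet me at noon"
assert stored != b"meet me at noon"
```

The point of the design is that the key never leaves the user, so a subpoena to the provider yields nothing readable.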

Why not? Partly because they want to preserve the ability to eavesdrop on your conversations. Surveillance is still the business model of the Internet, and every one of those companies wants access to your communications and your metadata. Your private thoughts and conversations are the product they sell to their customers. We have also learned that they read your e-mail for their own internal investigations.

But even if this were not true, even if—for example—Google were willing to forgo data mining your e-mail and video conversations in exchange for the marketing advantage it would gain over Microsoft, it still wouldn’t offer you real security. It can’t.

The biggest Internet companies don’t offer real security because the U.S. government won’t permit it.

This isn’t paranoia. We know that the U.S. government ordered the secure e-mail provider Lavabit to turn over its master keys and compromise every one of its users. We know that the U.S. government convinced Microsoft—either through bribery, coercion, threat, or legal compulsion—to make changes in how Skype operates, to make eavesdropping easier.

We don’t know what sort of pressure the U.S. government has put on Google and the others. We don’t know what secret agreements those companies have reached with the NSA. We do know the NSA’s BULLRUN program to subvert Internet cryptography was successful against many common protocols. Did the NSA demand Google’s keys, as it did with Lavabit? Did its Tailored Access Operations group break into Google’s servers and steal the keys?

We just don’t know.

The best we have are caveat-laden pseudo-assurances. At SXSW earlier this month, Google’s Eric Schmidt tried to reassure the audience by saying that he was “pretty sure that information within Google is now safe from any government’s prying eyes.” A more accurate statement might be, “Your data is safe from governments, except for the ways we don’t know about and the ways we cannot tell you about. And, of course, we still have complete access to it all, and can sell it at will to whomever we want.” That’s a lousy marketing pitch, but as long as the NSA is allowed to operate using secret court orders based on secret interpretations of secret law, it’ll never be any different.

Google, Facebook, Microsoft, and the others are already on record supporting legislative changes to rein in these practices. It would be better if they openly acknowledged their users’ insecurity and increased their pressure on the government to change, rather than trying to fool their users and customers.

This essay previously appeared on TheAtlantic.com.

Posted on March 31, 2014 at 9:18 AM

Creating Forensic Sketches from DNA

This seems really science fictional:

It’s already possible to make some inferences about the appearance of crime suspects from their DNA alone, including their racial ancestry and some shades of hair colour. And in 2012, a team led by Manfred Kayser of Erasmus University Medical Center in Rotterdam, the Netherlands, identified five genetic variants with detectable effects on facial shape. It was a start, but still a long way from reliable genetic photofits.

To take the idea a step further, a team led by population geneticist Mark Shriver of Pennsylvania State University and imaging specialist Peter Claes of the Catholic University of Leuven (KUL) in Belgium used a stereoscopic camera to capture 3D images of almost 600 volunteers from populations with mixed European and West African ancestry. Because people from Europe and Africa tend to have differently shaped faces, studying people with mixed ancestry increased the chances of finding genetic variants affecting facial structure.

Kayser’s study had looked for genes that affected the relative positions of nine facial “landmarks”, including the middle of each eyeball and the tip of the nose. By contrast, Claes and Shriver superimposed a mesh of more than 7000 points onto the scanned 3D images and recorded the precise location of each point. They also developed a statistical model to consider how genes, sex and racial ancestry affect the position of these points and therefore the overall shape of the face.

Next the researchers tested each of the volunteers for 76 genetic variants in genes that were already known to cause facial abnormalities when mutated. They reasoned that normal variation in genes that can cause such problems might have a subtle effect on the shape of the face. After using their model to control for the effects of sex and ancestry, they found 24 variants in 20 different genes that seemed to be useful predictors of facial shape (PLoS Genetics, DOI: 10.1371/journal.pgen.1004224).

Reconstructions based on these variants alone aren’t yet ready for routine use by crime labs, the researchers admit. Still, Shriver is already working with police to see if the method can help find the perpetrator in two cases of serial rape in Pennsylvania, for which police are desperate for new clues.
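The model described above is, at its core, additive: each landmark coordinate is predicted as a weighted sum of genetic-variant contributions plus sex and ancestry terms. A toy sketch of that prediction step (all variant names and effect sizes below are made up for illustration; the real model fits thousands of mesh points jointly):

```python
# Hypothetical per-allele effect sizes -- invented numbers, not from the paper.
EFFECTS = {"variant_1": 0.8, "variant_2": -0.5}
SEX_EFFECT, ANCESTRY_EFFECT = 2.0, 3.0

def predict_landmark(genotype, sex, ancestry):
    """Predict one landmark's displacement.
    genotype: allele counts (0/1/2) per variant; sex: 0 or 1;
    ancestry: proportion of West African ancestry, 0..1."""
    genetic = sum(EFFECTS[v] * genotype[v] for v in EFFECTS)
    return genetic + SEX_EFFECT * sex + ANCESTRY_EFFECT * ancestry

# Two copies of variant_1, one of variant_2, male, 50/50 ancestry:
print(predict_landmark({"variant_1": 2, "variant_2": 1}, sex=1, ancestry=0.5))  # -> 4.6
```

Do this for 7000+ mesh points at once and you get a predicted face shape; the hard part, which the paper addresses, is estimating those effect sizes reliably.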

If I had to guess, I’d imagine this kind of thing is a couple of decades away. But with a large enough database of genetic data, it’s certainly possible.

Posted on March 28, 2014 at 6:22 AM

Smarter People are More Trusting

Interesting research.

Both vocabulary and question comprehension were positively correlated with generalized trust. Those with the highest vocab scores were 34 percent more likely to trust others than those with the lowest scores, and someone who had a good perceived understanding of the survey questions was 11 percent more likely to trust others than someone with a perceived poor understanding. The correlation stayed strong even when researchers controlled for socio-economic class.

This study, too, found a correlation between trust and self-reported health and happiness. The trusting were 6 percent more likely to say they were “very happy,” and 7 percent more likely to report good or excellent health.

Full study results.

Posted on March 27, 2014 at 6:52 AM

Geolocating Twitter Users

Interesting research into figuring out where Twitter users are located, based on similar tweets from other users:

While geotags are the most definitive location information a tweet can have, tweets can also have plenty more salient information: hashtags, FourSquare check-ins, or text references to certain cities or states, to name a few. The authors of the paper created their algorithm by analyzing the content of tweets that did have geotags and then searching for similarities in content in tweets without geotags to assess where they might have originated from. Of a body of 1.5 million tweets, 90 percent were used to train the algorithm, and 10 percent were used to test it.

The paper.
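The approach is easy to sketch: build a per-city word profile from geotagged tweets, then score an ungeotagged tweet against each profile. A toy Python version (cities, tweets, and the smoothing constant are invented here; the paper’s actual model is considerably more sophisticated):

```python
from collections import Counter, defaultdict

# Learn which words co-occur with geotagged locations, then score an
# ungeotagged tweet against each city's word profile.
def train(geotagged):
    profiles = defaultdict(Counter)
    for city, text in geotagged:
        profiles[city].update(text.lower().split())
    return profiles

def locate(profiles, text):
    words = text.lower().split()
    def score(city):
        total = sum(profiles[city].values())
        # add-one smoothing against a crude fixed vocabulary size
        return sum((profiles[city][w] + 1) / (total + 1000) for w in words)
    return max(profiles, key=score)

training = [
    ("nyc", "bagel subway delays again #mta"),
    ("nyc", "just saw a rat carrying a bagel on the subway"),
    ("sf", "fog rolling over the bridge #karlthefog"),
    ("sf", "bart delays and fog this morning"),
]
profiles = train(training)
print(locate(profiles, "another fog morning on the bridge"))  # -> sf
```

The 90/10 split in the paper is exactly this: fit the profiles on the geotagged 90 percent, then check predictions on the held-out 10 percent.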

Posted on March 26, 2014 at 1:10 PM

NSA Hacks Huawei

Both Der Spiegel and the New York Times are reporting that the NSA has hacked Huawei pretty extensively, obtaining copies of the source code for the company’s products and much of the company’s e-mail. Aside from being a pretty interesting story about the operational capabilities of the NSA, it exposes some pretty blatant US government hypocrisy on this issue. As former Bush administration official (and a friend of mine) Jack Goldsmith writes:

The Huawei revelations are devastating rebuttals to hypocritical U.S. complaints about Chinese penetration of U.S. networks, and also make USG protestations about not stealing intellectual property to help U.S. firms’ competitiveness seem like the self-serving hairsplitting that it is. (I have elaborated on these points many times and will not repeat them here.) “The irony is that exactly what they are doing to us is what they have always charged that the Chinese are doing through us,” says a Huawei executive.

This isn’t to say that the Chinese are not targeting foreign networks through Huawei equipment; they almost certainly are.

Posted on March 24, 2014 at 12:51 PM

An Open Letter to IBM's Open Letter

Last week, IBM published an “open letter” about “government access to data,” in which it tried to assure its customers that it’s not handing everything over to the NSA. Unfortunately, the letter (quoted in part below) raises more questions than it answers.

At the outset, we think it is important for IBM to clearly state some simple facts:

  • IBM has not provided client data to the National Security Agency (NSA) or any other government agency under the program known as PRISM.
  • IBM has not provided client data to the NSA or any other government agency under any surveillance program involving the bulk collection of content or metadata.
  • IBM has not provided client data stored outside the United States to the U.S. government under a national security order, such as a FISA order or a National Security Letter.
  • IBM does not put “backdoors” in its products for the NSA or any other government agency, nor does IBM provide software source code or encryption keys to the NSA or any other government agency for the purpose of accessing client data.
  • IBM has and will continue to comply with the local laws, including data privacy laws, in all countries in which it operates.

To which I ask:

  • We know you haven’t provided data to the NSA under PRISM. It didn’t use that name with you. Even the NSA General Counsel said: “PRISM was an internal government term that as the result of leaks became the public term.” What program did you provide data to the NSA under?
  • It seems rather obvious that you haven’t provided the NSA with any data under a bulk collection surveillance program. You’re not Google; you don’t have bulk data to that extent. So why the caveat? And again, under what program did you provide data to the NSA?
  • Okay, so you say that you haven’t provided any data stored outside the US to the NSA under a national security order. Since those national security orders prohibit you from disclosing their existence, would you say anything different if you did receive them? And even if we believe this statement, it implies two questions. Why did you specifically not talk about data stored inside the US? And why did you specifically not talk about providing data under another sort of order?
  • Of course you don’t provide your source code to the NSA for the purpose of accessing client data. The NSA isn’t going to tell you that’s why it wants your source code. So, for what purposes did you provide your source code to the government? To get a contract? For audit purposes? For what?
  • Yes, we know you need to comply with all local laws, including US laws. That’s why we don’t trust you—the current secret interpretations of US law require you to screw your customers. I’d really rather you simply said that, and worked to change those laws, than pretend that you can convince us otherwise.

EDITED TO ADD (3/25): One more thing. This article says that you are “spending more than a billion dollars to build data centers overseas to reassure foreign customers that their information is safe from prying eyes in the United States government.” Do you not know that National Security Letters require you to turn over requested data, regardless of where in the world it is stored? Or do you just hope that your customers don’t realize that?

Posted on March 24, 2014 at 6:58 AM

New Book on Data and Power

I’m writing a new book, with the tentative title of Data and Power.

While it’s obvious that the proliferation of data affects power, it’s less clear how it does so. Corporations are collecting vast dossiers on our activities on- and off-line—initially to personalize marketing efforts, but increasingly to control their customer relationships. Governments are using surveillance, censorship, and propaganda—both to protect us from harm and to protect their own power. Distributed groups—socially motivated hackers, political dissidents, criminals, communities of interest—are using the Internet to both organize and effect change. And we as individuals are becoming both more powerful and less powerful. We can’t evade surveillance, but we can post videos of police atrocities online, bypassing censors and informing the world. How long we’ll still have those capabilities is unclear.

Understanding these trends involves understanding data. Data is generated by all computing processes. Most of it used to be thrown away, but declines in the prices of both storage and processing mean that more and more of it is now saved and used. Who saves the data, and how they use it, is a matter of extreme consequence, and will continue to be for the coming decades.

Data and Power examines these trends and more. The book looks at the proliferation and accessibility of data, and how it has enabled constant surveillance of our entire society. It examines how governments and corporations use that surveillance data, as well as how they control data for censorship and propaganda. The book then explores how data has empowered individuals and less-traditional power blocs, and how the interplay among all of these types of power will evolve in the future. It discusses technical controls on power, and the limitations of those controls. And finally, the book describes solutions to balance power in the future—both general principles for society as a whole, and specific near-term changes in technology, business, laws, and social norms.

There’s a fundamental trade-off we need to make as a society. Our data is enormously valuable in aggregate, yet it’s incredibly personal. The powerful will continue to demand aggregate data, yet we have to protect its intimate details. Balancing those two conflicting values is difficult, whether it’s medical data, location data, Internet search data, or telephone metadata. But balancing them is what society needs to do, and is almost certainly the fundamental issue of the Information Age.

As I said, Data and Power is just a tentative title. Suggestions for a better one—either a title or a subtitle—are appreciated. Here are some ideas to get you started:

  • Data and Power: The Political Science of Information Security
  • The Feudal Internet: How Data Affects Power and How Power Affects Data
  • Our Data Shadow: The Battles for Power in the Information Society
  • Data.Power: The Political Science of Information Security
  • Data and Power in the Information Age
  • Data and Goliath: The Balance of Power in the Information Age
  • The Power of Data: How the Information Society Upsets Power Balances

My plan is to finish the manuscript by the end of October, for publication in February 2015. Norton will be the publisher. I’ll post a table of contents in a couple of months. And, as with my previous books, I will be asking for volunteers to read and comment on a draft version.

If you notice I’m not posting as many blog entries, or writing as many essays, this is what I’m doing instead.

Posted on March 21, 2014 at 12:19 PM

Automatic Face-Recognition Software Getting Better

Facebook has developed a face-recognition system that works almost as well as the human brain:

Asked whether two unfamiliar photos of faces show the same person, a human being will get it right 97.53 percent of the time. New software developed by researchers at Facebook can score 97.25 percent on the same challenge, regardless of variations in lighting or whether the person in the picture is directly facing the camera.

Human brains are optimized for facial recognition, which makes this even more impressive.

This kind of technology will change video surveillance. Right now, video footage is mostly recorded and reviewed after the fact, and identifying the people in it is largely a forensic activity. This will make cameras part of an automated, real-time process for identifying people.

Posted on March 20, 2014 at 7:12 AM

MYSTIC: The NSA's Telephone Call Collection Program

The Washington Post is reporting on an NSA program called MYSTIC, which collects all—that’s 100%—of a country’s telephone calls. Those calls are stored in a database codenamed NUCLEON, and can be retrieved at a later date using a tool codenamed RETRO. This is voice, not metadata.

What’s interesting here is not the particular country whose data is being collected; that information was withheld from the article. It’s not even that the voice data is stored for a month, and then deleted. All of that can change, either at the whim of the NSA or as storage capabilities get larger. What’s interesting is that the capability exists to collect 100% of a country’s telephone calls, and the analysis tools are in place to search them.
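Nothing public describes MYSTIC’s implementation, but the retention policy described (keep everything, age it out after a month, search it retroactively) is conceptually just a rolling buffer. A toy sketch, with all names and numbers hypothetical:

```python
from collections import deque
from datetime import datetime, timedelta

RETENTION = timedelta(days=30)

class RollingCallBuffer:
    """Toy model of 'collect everything, keep 30 days': every call is
    stored on ingest, ages out silently, and can be searched after the
    fact while it remains in the window."""
    def __init__(self):
        self.calls = deque()  # (timestamp, caller, callee, audio), time-ordered

    def ingest(self, ts, caller, callee, audio):
        self.calls.append((ts, caller, callee, audio))
        cutoff = ts - RETENTION
        while self.calls and self.calls[0][0] < cutoff:
            self.calls.popleft()

    def retro(self, number):
        # Retrospective search: every stored call touching a number.
        return [c for c in self.calls if number in (c[1], c[2])]

buf = RollingCallBuffer()
t0 = datetime(2014, 1, 1)
buf.ingest(t0, "+100", "+200", b"...")
buf.ingest(t0 + timedelta(days=40), "+100", "+300", b"...")
assert len(buf.retro("+100")) == 1   # the first call has already aged out
```

Note how little is fixed in this design: the 30-day window is one constant, which is exactly why the retention period could quietly grow with storage capacity.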

Posted on March 18, 2014 at 3:19 PM

Details of the Target Credit Card Breach

Long and interesting article about the Target credit card breach from last year. What’s especially interesting to me is that the attack was preventable; the problem was that Target botched its incident response.

In testimony before Congress, Target has said that it was only after the U.S. Department of Justice notified the retailer about the breach in mid-December that company investigators went back to figure out what happened. What it hasn’t publicly revealed: Poring over computer logs, Target found FireEye’s alerts from Nov. 30 and more from Dec. 2, when hackers installed yet another version of the malware. Not only should those alarms have been impossible to miss, they went off early enough that the hackers hadn’t begun transmitting the stolen card data out of Target’s network. Had the company’s security team responded when it was supposed to, the theft that has since engulfed Target, touched as many as one in three American consumers, and led to an international manhunt for the hackers never would have happened at all.

This is exactly the sort of thing that my new company, Co3 Systems, solves. All of those next-generation endpoint detection systems, threat intelligence feeds, and so on only matter if you do something in response to them. If Target had had incident response procedures in place, and a system to ensure it followed those procedures, it would have been much more likely to respond to the alerts it received from FireEye.
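The failure mode here wasn’t detection; it was the absence of any mechanism guaranteeing that someone acted on each alert. A minimal sketch of such a mechanism, an acknowledgment deadline with escalation (the four-hour deadline and alert names below are invented):

```python
from datetime import datetime, timedelta

ACK_DEADLINE = timedelta(hours=4)

class AlertTracker:
    """Toy escalation tracker: detection already happened upstream; the
    job here is guaranteeing every alert gets a human response in time."""
    def __init__(self):
        self.open = {}  # alert_id -> time raised

    def raise_alert(self, alert_id, when):
        self.open[alert_id] = when

    def acknowledge(self, alert_id):
        self.open.pop(alert_id, None)

    def overdue(self, now):
        return [a for a, t in self.open.items() if now - t > ACK_DEADLINE]

tracker = AlertTracker()
tracker.raise_alert("alert-nov30", datetime(2013, 11, 30, 9, 0))
tracker.raise_alert("alert-dec02", datetime(2013, 12, 2, 9, 0))
tracker.acknowledge("alert-dec02")
# By Dec 3, the unacknowledged Nov 30 alert should have escalated:
assert tracker.overdue(datetime(2013, 12, 3)) == ["alert-nov30"]
```

Real incident response platforms do far more, but the core discipline is this loop: every alert is either acknowledged or escalated, and none silently expires.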

This is why I believe that incident response is the most underserved area of IT security right now.

Posted on March 17, 2014 at 9:10 AM

Schneier Speaking Schedule: March–April

Here’s my upcoming speaking schedule for March and April.

Information about all my speaking engagements can be found here.

Posted on March 15, 2014 at 1:58 PM

Nicholas Weaver Explains how QUANTUM Works

An excellent essay. For the non-technical, his conclusion is the most important:

Everything we’ve seen about QUANTUM and other internet activity can be replicated with a surprisingly moderate budget, using existing tools with just a little modification.

The biggest limitation on QUANTUM is location: The attacker must be able to see a request which identifies the target. Since the same techniques can work on a Wi-Fi network, a $50 Raspberry Pi, located in a Foggy Bottom Starbucks, can provide any country, big and small, with a little window of QUANTUM exploitation. A foreign government can perform the QUANTUM attack NSA-style wherever your traffic passes through their country.

And that’s the bottom line with the NSA’s QUANTUM program. The NSA does not have a monopoly on the technology, and their widespread use acts as implicit permission to others, both nation-state and criminal.

Moreover, until we fix the underlying Internet architecture that makes QUANTUM attacks possible, we are vulnerable to all of those attackers.
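The “location” constraint Weaver describes comes down to a race: the injector and the legitimate server both answer the same request, and whichever response arrives first wins. A toy simulation with made-up latencies (no networking involved):

```python
import random

# QUANTUM reduced to its essence: both the injector and the legitimate
# server answer the same request, and the first response to arrive wins.
# Latency ranges below are invented; an on-path injector near the target
# simply has a shorter round trip than a distant server.
def race(injector_ms, server_ms):
    return "injected" if injector_ms < server_ms else "legitimate"

random.seed(0)
trials = [race(random.uniform(1, 10),     # injector close to the target
               random.uniform(30, 100))   # real server far away
          for _ in range(1000)]
print(trials.count("injected"))  # -> 1000: the nearer party always wins
```

That is why the attack works from a coffee-shop Wi-Fi network just as well as from a backbone tap: proximity to the victim, not sophistication, decides the race.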

Posted on March 14, 2014 at 2:01 PM

Security as a Public Health Issue

Cory Doctorow argues that computer security is analogous to public health:

I think there’s a good case to be made for security as an exercise in public health. It sounds weird at first, but the parallels are fascinating and deep and instructive.

Last year, when I finished that talk in Seattle, a talk about all the ways that insecure computers put us all at risk, a woman in the audience put up her hand and said, “Well, you’ve scared the hell out of me. Now what do I do? How do I make my computers secure?”

And I had to answer: “You can’t. No one of us can. I was a systems administrator 15 years ago. That means that I’m barely qualified to plug in a WiFi router today. I can’t make my devices secure and neither can you. Not when our governments are buying up information about flaws in our computers and weaponising them as part of their crime-fighting and anti-terrorism strategies. Not when it is illegal to tell people if there are flaws in their computers, where such a disclosure might compromise someone’s anti-copying strategy.

But: If I had just stood here and spent an hour telling you about water-borne parasites; if I had told you about how inadequate water-treatment would put you and everyone you love at risk of horrifying illness and terrible, painful death; if I had explained that our very civilisation was at risk because the intelligence services were pursuing a strategy of keeping information about pathogens secret so they can weaponise them, knowing that no one is working on a cure; you would not ask me ‘How can I purify the water coming out of my tap?'”

Because when it comes to public health, individual action only gets you so far. It doesn’t matter how good your water is, if your neighbour’s water gives him cholera, there’s a good chance you’ll get cholera, too. And even if you stay healthy, you’re not going to have a very good time of it when everyone else in your country is stricken and has taken to their beds.

If you discovered that your government was hoarding information about water-borne parasites instead of trying to eradicate them; if you discovered that they were more interested in weaponising typhus than they were in curing it, you would demand that your government treat your water-supply with the gravitas and seriousness that it is due.

Posted on March 14, 2014 at 6:01 AM

How the NSA Exploits VPN and VoIP Traffic

These four slides, released yesterday, describe one process the NSA has for eavesdropping on VPN and VoIP traffic. There’s a lot of information on these slides, though it’s a veritable sea of code names. No details as to how the NSA decrypts those ESP—”Encapsulating Security Payload”—packets, although there are some clues in the form of code names in the slides.

Posted on March 13, 2014 at 9:37 AM

New Information on the NSA's QUANTUM Program

There’s a new (overly breathless) article on the NSA’s QUANTUM program, including a bunch of new source documents. Of particular note is this page listing a variety of QUANTUM programs. Note that QUANTUMCOOKIE, “which forces users to divulge stored cookies,” is not on this list.

I’m busy today, so please tell me anything interesting you see in the comments.

I have written previously about QUANTUM.

Posted on March 12, 2014 at 12:55 PM

Insurance Companies Pushing for More Cybersecurity

This is a good development:

For years, said Ms Khudari, Kiln and many other syndicates had offered cover for data breaches, to help companies recover if attackers penetrated networks and stole customer information.

Now, she said, the same firms were seeking multi-million pound policies to help them rebuild if their computers and power-generation networks were damaged in a cyber-attack.

“They are all worried about their reliance on computer systems and how they can offset that with insurance,” she said.

Any company that applies for cover has to let experts employed by Kiln and other underwriters look over their systems to see if they are doing enough to keep intruders out.

Assessors look at the steps firms take to keep attackers away, how they ensure software is kept up to date and how they oversee networks of hardware that can span regions or entire countries.

Unfortunately, said Ms Khudari, after such checks were carried out, the majority of applicants were turned away because their cyber-defences were lacking.

Insurance is an excellent pressure point to influence security.

Posted on March 12, 2014 at 12:06 PM

Postmortem: NSA Exploits of the Day

When I decided to post an exploit a day from the TAO implant catalog, my goal was to highlight the myriad capabilities of the NSA’s Tailored Access Operations group, basically, its black bag teams. The catalog was published by Der Spiegel along with a pair of articles on the NSA’s CNE—that’s Computer Network Exploitation—operations, and it was just too much to digest. While the various nations’ counterespionage groups certainly pored over the details, they largely washed over us in the academic and commercial communities. By republishing a single exploit a day, I hoped we would all read and digest each individual TAO capability.

It’s important that we know the details of these attack tools. Not because we want to evade the NSA—although some of us do—but because the NSA doesn’t have a monopoly on either technology or cleverness. The NSA might have a larger budget than every other intelligence agency in the world combined, but these tools are the sorts of things that any well-funded nation-state adversary would use. And as technology advances, they are the sorts of tools we’re going to see cybercriminals use. So think of this less as what the NSA does, and more of a head start as to what everyone will be using.

Which means we need to figure out how to defend against them.

The NSA has put a lot of effort into designing software implants that evade antivirus and other detection tools, transmit data when they know they can’t be detected, and survive reinstallation of the operating system. It has software implants designed to jump air gaps without being detected. It has an impressive array of hardware implants, also designed to evade detection. And it spends a lot of effort on hacking routers and switches. These sorts of observations should become a road map for anti-malware companies.

Anyone else have observations or comments, now that we’ve seen the entire catalog?

The TAO catalog isn’t current; it’s from 2008. So the NSA has had six years to improve all of the tools in this catalog, and to add a bunch more. Figuring out how to extrapolate to current capabilities is also important.

Posted on March 12, 2014 at 6:31 AM

RAGEMASTER: NSA Exploit of the Day

Today’s item—and this is the final item—from the NSA’s Tailored Access Operations (TAO) group implant catalog:

RAGEMASTER

(TS//SI//REL TO USA,FVEY) RF retro-reflector that provides an enhanced radar cross-section for VAGRANT collection. It’s concealed in a standard computer video graphics array (VGA) cable between the video card and the video monitor. It’s typically installed in the ferrite on the video cable.

(U) Capabilities
(TS//SI//REL TO USA,FVEY) RAGEMASTER provides a target for RF flooding and allows for easier collection of the VAGRANT video signal. The current RAGEMASTER unit taps the red video line on the VGA cable. It was found that, empirically, this provides the best video return and cleanest readout of the monitor contents.

(U) Concept of Operation
(TS//SI//REL TO USA,FVEY) The RAGEMASTER taps the red video line between the video card within the desktop unit and the computer monitor, typically an LCD. When the RAGEMASTER is illuminated by a radar unit, the illuminating signal is modulated with the red video information. This information is re-radiated, where it is picked up at the radar, demodulated, and passed onto the processing unit, such as a LFS-2 and an external monitor, NIGHTWATCH, GOTHAM, or (in the future) VIEWPLATE. The processor recreates the horizontal and vertical sync of the targeted monitor, thus allowing TAO personnel to see what is displayed on the targeted monitor.

Unit Cost: $30

Status: Operational. Manufactured on an as-needed basis. Contact POC for availability information.

Page, with graphics, is here. General information about TAO and the catalog is here.

In the comments, feel free to discuss how the exploit works, how we might detect it, how it has probably been improved since the catalog entry in 2008, and so on.

Posted on March 11, 2014 at 2:05 PM • 10 Comments

The Security of the Fortuna PRNG

Providing random numbers on computers can be very difficult. Back in 2003, Niels Ferguson and I designed Fortuna as a secure PRNG. Particularly important is how it collects entropy from various processes on the computer and mixes them all together.
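The pool-based accumulation is the interesting part, and it can be sketched in a few lines. This is a toy illustration, not the real generator: the class and method names are mine, SHA-256 stands in for the full generator machinery, and key management is reduced to a single hash.

```python
import hashlib

class FortunaSketch:
    """Toy sketch of Fortuna's entropy accumulator (not a real CSPRNG).

    Entropy events are spread round-robin across 32 pools.  Reseed
    number r draws from pool i only when 2**i divides r, so deeper
    pools feed in exponentially less often and hold entropy in
    reserve, letting the generator recover even after its state is
    compromised."""
    NUM_POOLS = 32

    def __init__(self):
        self.pools = [hashlib.sha256() for _ in range(self.NUM_POOLS)]
        self.next_pool = 0
        self.reseed_count = 0
        self.key = b"\x00" * 32  # generator key, replaced on reseed

    def add_event(self, source_id: int, data: bytes) -> None:
        # Round-robin distribution of entropy events across the pools.
        self.pools[self.next_pool].update(bytes([source_id, len(data)]) + data)
        self.next_pool = (self.next_pool + 1) % self.NUM_POOLS

    def reseed(self) -> None:
        self.reseed_count += 1
        material = b""
        for i in range(self.NUM_POOLS):
            if self.reseed_count % (2 ** i) != 0:
                break  # pool i is drained only when 2**i divides r
            material += self.pools[i].digest()
            self.pools[i] = hashlib.sha256()  # reset the drained pool
        self.key = hashlib.sha256(self.key + material).digest()
```

Pool 0 is used on every reseed, pool 1 on every second reseed, pool 2 on every fourth, and so on; an attacker who can predict some entropy sources cannot keep the generator compromised forever.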

While Fortuna is widely used, there hadn’t been any real analysis of the system. This has now changed. A new paper by Yevgeniy Dodis, Adi Shamir, Noah Stephens-Davidowitz, and Daniel Wichs provides a theoretical model of entropy collection and PRNGs. They analyze Fortuna, find it good but not optimal, and then present their own optimal construction.

Excellent, and long-needed, research.

Posted on March 11, 2014 at 6:28 AM • 36 Comments

FIREWALK: NSA Exploit of the Day

Today’s item from the NSA’s Tailored Access Operations (TAO) group implant catalog:

FIREWALK

(TS//SI//REL) FIREWALK is a bidirectional network implant, capable of passively collecting Gigabit Ethernet network traffic, and actively injecting Ethernet packets onto the same target network.

(TS//SI//REL) FIREWALK is a bi-directional 10/100/1000bT (Gigabit) Ethernet network implant residing within a dual stacked RJ45 / USB connector. FIREWALK is capable of filtering and egressing network traffic over a custom RF link and injecting traffic as commanded; this allows a ethernet tunnel (VPN) to be created between target network and the ROC (or an intermediate redirector node such as DNT’s DANDERSPRITZ tool.) FIREWALK allows active exploitation of a target network with a firewall or air gap protection.

(TS//SI//REL) FIREWALK uses the HOWLERMONKEY transceiver for back-end communications. It can communicate with an LP or other compatible HOWLERMONKEY based ANT products to increase RF range through multiple hops.

Status: Prototype Available—August 2008

Unit Cost: 50 Units $537K

Page, with graphics, is here. General information about TAO and the catalog is here.

In the comments, feel free to discuss how the exploit works, how we might detect it, how it has probably been improved since the catalog entry in 2008, and so on.

Posted on March 10, 2014 at 2:33 PM • 16 Comments

Computer Network Exploitation vs. Computer Network Attack

Back when we first started getting reports of the Chinese breaking into U.S. computer networks for espionage purposes, we described it in some very strong language. We called the Chinese actions cyberattacks. We sometimes even invoked the word cyberwar, and declared that a cyber-attack was an act of war.

When Edward Snowden revealed that the NSA has been doing exactly the same thing as the Chinese to computer networks around the world, we used much more moderate language to describe U.S. actions: words like espionage, or intelligence gathering, or spying. We stressed that it’s a peacetime activity, and that everyone does it.

The reality is somewhere in the middle, and the problem is that our intuitions are based on history.

Electronic espionage is different today than it was in the pre-Internet days of the Cold War. Eavesdropping isn’t passive anymore. It’s not the electronic equivalent of sitting close to someone and overhearing a conversation. It’s not passively monitoring a communications circuit. It’s more likely to involve actively breaking into an adversary’s computer network—be it Chinese, Brazilian, or Belgian—and installing malicious software designed to take over that network.

In other words, it’s hacking. Cyber-espionage is a form of cyber-attack. It’s an offensive action. It violates the sovereignty of another country, and we’re doing it with far too little consideration of its diplomatic and geopolitical costs.

The abbreviation-happy U.S. military has two related terms for what it does in cyberspace. CNE stands for “computer network exploitation.” That’s spying. CNA stands for “computer network attack.” That includes actions designed to destroy or otherwise incapacitate enemy networks. That’s—among other things—sabotage.

CNE and CNA are not solely in the purview of the U.S.; everyone does it. We know that other countries are building their offensive cyberwar capabilities. We have discovered sophisticated surveillance networks from other countries with names like GhostNet, Red October, The Mask. We don’t know who was behind them—these networks are very difficult to trace back to their source—but we suspect China, Russia, and Spain, respectively. We recently learned of a hacking tool called RCS that’s used by 21 governments: Azerbaijan, Colombia, Egypt, Ethiopia, Hungary, Italy, Kazakhstan, Korea, Malaysia, Mexico, Morocco, Nigeria, Oman, Panama, Poland, Saudi Arabia, Sudan, Thailand, Turkey, UAE, and Uzbekistan.

When the Chinese company Huawei tried to sell networking equipment to the U.S., the government considered that equipment a “national security threat,” rightly fearing that those switches were backdoored to allow the Chinese government both to eavesdrop on and to attack U.S. networks. Now we know that the NSA is doing the exact same thing to American-made equipment sold in China, as well as to those very same Huawei switches.

The problem is that, from the point of view of the object of an attack, CNE and CNA look the same, except for the end result. Today’s surveillance systems involve breaking into computers and installing malware, just as cybercriminals do when they want your money. And just like Stuxnet, the U.S./Israeli cyberweapon that disabled the Natanz nuclear facility in Iran in 2010.

This is what Microsoft’s General Counsel Brad Smith meant when he said: “Indeed, government snooping potentially now constitutes an ‘advanced persistent threat,’ alongside sophisticated malware and cyber attacks.”

When the Chinese penetrate U.S. computer networks, which they do with alarming regularity, we don’t really know what they’re doing. Are they modifying our hardware and software to just eavesdrop, or are they leaving “logic bombs” that could be triggered to do real damage at some future time? It can be impossible to tell. As a 2011 EU cybersecurity policy document stated (page 7):

…technically speaking, CNA requires CNE to be effective. In other words, what may be preparations for cyberwarfare can well be cyberespionage initially or simply be disguised as such.

We can’t tell the intentions of the Chinese, and they can’t tell ours, either.

Much of the current debate in the U.S. is over what the NSA should be allowed to do, and whether limiting the NSA somehow empowers other governments. That’s the wrong debate. We don’t get to choose between a world where the NSA spies and one where the Chinese spy. Our choice is between a world where our information infrastructure is vulnerable to all attackers or secure for all users.

As long as cyber-espionage equals cyber-attack, we would be much safer if we focused the NSA’s efforts on securing the Internet from these attacks. True, we wouldn’t get the same level of access to information flows around the world. But we would be protecting the world’s information flows—including our own—from both eavesdropping and more damaging attacks. We would be protecting our information flows from governments, nonstate actors, and criminals. We would be making the world safer.

Offensive military operations in cyberspace, be they CNE or CNA, should be the purview of the military. In the U.S., that’s CyberCommand. Such operations should be recognized as offensive military actions, and should be approved at the highest levels of the executive branch, and be subject to the same international law standards that govern acts of war in the offline world.

If we’re going to attack another country’s electronic infrastructure, we should treat it like any other attack on a foreign country. It’s no longer just espionage; it’s a cyber-attack.

This essay previously appeared on TheAtlantic.com.

Posted on March 10, 2014 at 6:46 AM • 23 Comments

COTTONMOUTH-III: NSA Exploit of the Day

Today’s item from the NSA’s Tailored Access Operations (TAO) group implant catalog:

COTTONMOUTH-III

(TS//SI//REL) COTTONMOUTH-III (CM-III) is a Universal Serial Bus (USB) hardware implant, which will provide a wireless bridge into a target network as well as the ability to load exploit software onto target PCs.

(TS//SI//REL) CM-III will provide air-gap bridging, software persistence capability, “in-field” re-programmability, and covert communications with a host software implant over the USB. The RF link will enable command and data infiltration and exfiltration. CM-III will also communicate with Data Network Technologies (DNT) software (STRAITBIZARRE) through a covert channel implemented on the USB, using this communication channel to pass commands and data between hardware and software implants. CM-III will be a GENIE-compliant implant based on CHIMNEYPOOL.

(TS//SI//REL) CM-III conceals digital components (TRINITY), USB 2.0 HS hub, switches, and HOWLERMONKEY (HM) RF Transceiver within a RJ45 Dual Stacked USB connector. CM-I has the ability to communicate to other CM devices over the RF link using an over-the-air protocol called SPECULATION. CM-III can provide a short range inter-chassis link to other CM devices or an intra-chassis RF link to a long haul relay subsystem.

Status: Availability—May 2009

Unit Cost: 50 units: $1,248K

Page, with graphics, is here. General information about TAO and the catalog is here.

In the comments, feel free to discuss how the exploit works, how we might detect it, how it has probably been improved since the catalog entry in 2008, and so on.

Posted on March 7, 2014 at 2:41 PM • 10 Comments

Academic Paper Spam

There seems to be an epidemic of computer-generated nonsense academic papers.

Labbé does not know why the papers were submitted—or even if the authors were aware of them. Most of the conferences took place in China, and most of the fake papers have authors with Chinese affiliations. Labbé has emailed editors and authors named in many of the papers and related conferences but received scant replies; one editor said that he did not work as a program chair at a particular conference, even though he was named as doing so, and another author claimed his paper was submitted on purpose to test out a conference, but did not respond on follow-up. Nature has not heard anything from a few enquiries.

In this arms race between fake-paper-generator and fake-paper-detector, the advantage goes to the detector.

Posted on March 7, 2014 at 6:13 AM • 28 Comments

COTTONMOUTH-II: NSA Exploit of the Day

Today’s item from the NSA’s Tailored Access Operations (TAO) group implant catalog:

COTTONMOUTH-II

(TS//SI//REL) COTTONMOUTH-II (CM-II) is a Universal Serial Bus (USB) hardware Host Tap, which will provide a covert link over USB link into a target network. CM-II is intended to be operate with a long haul relay subsystem, which is co-located within the target equipment. Further integration is needed to turn this capability into a deployable system.

(TS//SI//REL) CM-II will provide software persistence capability, “in-field” re-programmability, and covert communications with a host software implant over the USB. CM-II will also communicate with Data Network Technologies (DNT) software (STRAITBIZARRE) through a covert channel implemented on the USB, using this communication channel to pass commands and data between hardware and software implants. CM-II will be a GENIE-compliant implant based on CHIMNEYPOOL.

(TS//SI//REL) CM-II consists of the CM-I digital hardware and the long haul relay concealed somewhere within the target chassis. A USB 2.0 HS hub with switches is concealed in a dual stacked USB connector, and the two parts are hard-wired, providing a intra-chassis link. The long haul relay provides the wireless bridge into the target’s network.

Unit Cost: 50 units: $200K

Status: Availability—September 2008

Page, with graphics, is here. General information about TAO and the catalog is here.

In the comments, feel free to discuss how the exploit works, how we might detect it, how it has probably been improved since the catalog entry in 2008, and so on.

Posted on March 6, 2014 at 2:18 PM • 6 Comments

Wi-Fi Virus

Researchers have demonstrated the first airborne Wi-Fi computer virus. The paper, by Jonny Milliken, Valerio Selis, and Alan Marshall, is “Detection and analysis of the Chameleon WiFi access point virus,” EURASIP Journal on Information Security.

Abstract: This paper analyses and proposes a novel detection strategy for the ‘Chameleon’ WiFi AP-AP virus. Previous research has considered virus construction, likely virus behaviour and propagation methods. The research here describes development of an objective measure of virus success, the impact of product susceptibility, the acceleration of infection and the growth of the physical area covered by the virus. An important conclusion of this investigation is that the connectivity between devices in the victim population is a more significant influence on virus propagation than any other factor. The work then proposes and experimentally verifies the application of a detection method for the virus. This method utilises layer 2 management frame information which can detect the attack while maintaining user privacy and user confidentiality, a key requirement in many security solutions.

Posted on March 6, 2014 at 5:44 AM • 14 Comments

COTTONMOUTH-I: NSA Exploit of the Day

Today’s item from the NSA’s Tailored Access Operations (TAO) group implant catalog:

COTTONMOUTH-I

(TS//SI//REL) COTTONMOUTH-I (CM-I) is a Universal Serial Bus (USB) hardware implant which will provide a wireless bridge into a target network as well as the ability to load exploit software onto target PCs.

(TS//SI//REL) CM-I will provide air-gap bridging, software persistence capability, “in-field” re-programmability, and covert communications with a host software implant over the USB. The RF link will enable command and data infiltration and exfiltration. CM-I will also communicate with Data Network Technologies (DNT) software (STRAITBIZARRE) through a covert channel implemented on the USB, using this communication channel to pass commands and data between hardware and software implants. CM-I will be a GENIE-compliant implant based on CHIMNEYPOOL.

(TS//SI//REL) CM-I conceals digital components (TRINITY), USB 1.1 FS hub, switches, and HOWLERMONKEY (HM) RF Transceiver within the USB Series-A cable connector. MOCCASIN is the version permanently connected to a USB keyboard. Another version can be made with an unmodified USB connector at the other end. CM-I has the ability to communicate to other CM devices over the RF link using an over-the-air protocol called SPECULATION.

Status: Availability—January 2009

Unit Cost: 50 units: $1,015K

Page, with graphics, is here. General information about TAO and the catalog is here.

In the comments, feel free to discuss how the exploit works, how we might detect it, how it has probably been improved since the catalog entry in 2008, and so on.

Posted on March 5, 2014 at 2:27 PM • 18 Comments

Surveillance by Algorithm

Increasingly, we are watched not by people but by algorithms. Amazon and Netflix track the books we buy and the movies we stream, and suggest other books and movies based on our habits. Google and Facebook watch what we do and what we say, and show us advertisements based on our behavior. Google even modifies our web search results based on our previous behavior. Smartphone navigation apps watch us as we drive, and update suggested route information based on traffic congestion. And the National Security Agency, of course, monitors our phone calls, emails and locations, then uses that information to try to identify terrorists.

Documents provided by Edward Snowden and revealed by the Guardian today show that the UK spy agency GCHQ, with help from the NSA, has been collecting millions of webcam images from innocent Yahoo users. And that speaks to a key distinction in the age of algorithmic surveillance: is it really okay for a computer to monitor you online, and for that data collection and analysis only to count as a potential privacy invasion when a person sees it? I say it’s not, and the latest Snowden leaks only make more clear how important this distinction is.

The robots-vs-spies divide is especially important as we decide what to do about NSA and GCHQ surveillance. The spy community and the Justice Department have reported back early on President Obama’s request for changing how the NSA “collects” your data, but the potential reforms—FBI monitoring, holding on to your phone records and more—still largely depend on what the meaning of “collects” is.

Indeed, ever since Snowden provided reporters with a trove of top secret documents, we’ve been subjected to all sorts of NSA word games. And the word “collect” has a very special definition, according to the Department of Defense (DoD). A 1982 procedures manual (pdf; page 15) says: “information shall be considered as ‘collected’ only when it has been received for use by an employee of a DoD intelligence component in the course of his official duties.” And “data acquired by electronic means is ‘collected’ only when it has been processed into intelligible form.”

Director of National Intelligence James Clapper likened the NSA’s accumulation of data to a library. All those books are stored on the shelves, but very few are actually read. “So the task for us in the interest of preserving security and preserving civil liberties and privacy,” says Clapper, “is to be as precise as we possibly can be when we go in that library and look for the books that we need to open up and actually read.” Only when an individual book is read does it count as “collection,” in government parlance.

So, think of that friend of yours who has thousands of books in his house. According to the NSA, he’s not actually “collecting” books. He’s doing something else with them, and the only books he can claim to have “collected” are the ones he’s actually read.

This is why Clapper claims—to this day—that he didn’t lie in a Senate hearing when he replied “no” to this question: “Does the NSA collect any type of data at all on millions or hundreds of millions of Americans?”

If the NSA collects—I’m using the everyday definition of the word here—all of the contents of everyone’s e-mail, it doesn’t count it as being collected in NSA terms until someone reads it. And if it collects—I’m sorry, but that’s really the correct word—everyone’s phone records or location information and stores it in an enormous database, that doesn’t count as being collected—NSA definition—until someone looks at it. If the agency uses computers to search those emails for keywords, or correlates that location information for relationships between people, it doesn’t count as collection, either. Only when those computers spit out a particular person has the data—in NSA terms—actually been collected.

If the modern spy dictionary has you confused, maybe dogs can help us understand why this legal workaround, by big tech companies and the government alike, is still a serious invasion of privacy.

Back when Gmail was introduced, this was Google’s defense, too, about its context-sensitive advertising. Google’s computers examine each individual email and insert an advertisement nearby, related to the contents of your email. But no person at Google reads any Gmail messages; only a computer does. In the words of one Google executive: “Worrying about a computer reading your email is like worrying about your dog seeing you naked.”

But now that we have an example of a spy agency seeing people naked—there are a surprising number of sexually explicit images in the newly revealed Yahoo image collection—we can more viscerally understand the difference.

To wit: when you’re watched by a dog, you know that what you’re doing will go no further than the dog. The dog can’t remember the details of what you’ve done. The dog can’t tell anyone else. When you’re watched by a computer, that’s not true. You might be told that the computer isn’t saving a copy of the video, but you have no assurance that that’s true. You might be told that the computer won’t alert a person if it perceives something of interest, but you can’t know if that’s true. You do know that the computer is making decisions based on what it receives, and you have no way of confirming that no human being will access that decision.

When a computer stores your data, there’s always a risk of exposure. There’s the risk of accidental exposure, when some hacker or criminal breaks in and steals the data. There’s the risk of purposeful exposure, when the organization that has your data uses it in some manner. And there’s the risk that another organization will demand access to the data. The FBI can serve a National Security Letter on Google, demanding details on your email and browsing habits. There isn’t a court order in the world that can get that information out of your dog.

Of course, any time we’re judged by algorithms, there’s the potential for false positives. You are already familiar with this; just think of all the irrelevant advertisements you’ve been shown on the Internet, based on some algorithm misinterpreting your interests. In advertising, that’s okay. It’s annoying, but there’s little actual harm, and you were busy reading your email anyway, right? But that harm increases as the accompanying judgments become more important: our credit ratings depend on algorithms; how we’re treated at airport security does, too. And most alarming of all, drone targeting is partly based on algorithmic surveillance.

The primary difference between a computer and a dog is that the computer interacts with other people in the real world, and the dog does not. If someone could isolate the computer in the same way a dog is isolated, we wouldn’t have any reason to worry about algorithms crawling around in our data. But we can’t. Computer algorithms are intimately tied to people. And when we think of computer algorithms surveilling us or analyzing our personal data, we need to think about the people behind those algorithms. Whether or not anyone actually looks at our data, the very fact that they even could is what makes it surveillance.

This is why Yahoo called GCHQ’s webcam-image collection “a whole new level of violation of our users’ privacy.” This is why we’re not mollified by attempts from the UK equivalent of the NSA to apply facial recognition algorithms to the data, or to limit how many people viewed the sexually explicit images. This is why Google’s eavesdropping is different than a dog’s eavesdropping, and why the NSA’s definition of “collect” makes no sense whatsoever.

This essay previously appeared on theguardian.com.

Posted on March 5, 2014 at 6:13 AM • 64 Comments

WATERWITCH: NSA Exploit of the Day

Today’s item from the NSA’s Tailored Access Operations (TAO) group implant catalog:

WATERWITCH

(S//SI) Hand held finishing tool used for geolocating targeted handsets in the field.

(S//SI) Features:

  • Split display/controller for flexible deployment capability
  • External antenna for DFing target; internal antenna for communication with active interrogator
  • Multiple technology capability based on SDR Platform; currently UMTS, with GSM and CDMA2000 under development
  • Approximate size 3″ x 7.5″ x 1.25″ (radio), 2.5″ x 5″ x 0.75″ (display); radio shrink in planning stages
  • Display uses E-Ink technology for low light emissions

(S//SI) Tactical Operators use WATERWITCH to locate handsets (last mile) where handset is connected to Typhon or similar equipment interrogator. WATERWITCH emits tone and gives signal strength of target handset. Directional antenna on unit allows operator to locate specific handset.

Status: Under Development. Available FY-2008
LRIP Production due August 2008

Unit Cost:

Page, with graphics, is here. General information about TAO and the catalog is here.

In the comments, feel free to discuss how the exploit works, how we might detect it, how it has probably been improved since the catalog entry in 2008, and so on.

Posted on March 4, 2014 at 2:23 PM • 15 Comments

How NIST Develops Cryptographic Standards

This document gives a good overview of how NIST develops cryptographic standards and guidelines. It’s still in draft, and comments are appreciated.

Given that NIST has been tainted by the NSA’s actions to subvert cryptographic standards and protocols, more transparency in this process is appreciated. I think NIST is doing a fine job and that it’s not shilling for the NSA, but it needs to do more to convince the world of that.

Posted on March 4, 2014 at 6:41 AM • 22 Comments

TYPHON HX: NSA Exploit of the Day

Today’s item from the NSA’s Tailored Access Operations (TAO) group implant catalog:

TYPHON HX

(S//SI//FVEY) Base Station Router – Network-In-a-Box (NIB) supporting GSM bands 850/900/1800/1900 and associated full GSM signaling and call control.

(S//SI//FVEY) Tactical SIGINT elements use this equipment to find, fix and finish targeted handset users.

(S//SI) Target GSM handset registers with BSR unit.

(S//SI) Operators are able to geolocate registered handsets, capturing the user.

(S//SI//REL) The macro-class Typhon is a Network-In-a-Box (NIB), which includes all the necessary architecture to support Mobile Station call processing and SMS messaging in a stand-alone chassis with a pre-provisioning capability.

(S//SI//REL) The Typhon system kit includes the amplified Typhon system, OAM&P Laptop, cables, antennas and AC/DC power supply.

(U//FOUO) An 800 WH LiIon Battery kit is offered separately.

(U) A bracket and mounting kit are available upon request.

(U) Status: Available 4 mos ARO

(S//SI//REL) Operational Restrictions exist for equipment deployment.

Page, with graphics, is here. General information about TAO and the catalog is here.

In the comments, feel free to discuss how the exploit works, how we might detect it, how it has probably been improved since the catalog entry in 2008, and so on.

Posted on March 3, 2014 at 2:19 PM • 11 Comments

Choosing Secure Passwords

As insecure as passwords generally are, they’re not going away anytime soon. Every year you have more and more passwords to deal with, and every year they get easier and easier to break. You need a strategy.

The best way to explain how to choose a good password is to explain how they’re broken. The general attack model is what’s known as an offline password-guessing attack. In this scenario, the attacker gets a file of encrypted passwords from somewhere people want to authenticate to. His goal is to turn that encrypted file into unencrypted passwords he can use to authenticate himself. He does this by guessing passwords, and then seeing if they’re correct. He can try guesses as fast as his computer will process them—and he can parallelize the attack—and gets immediate confirmation if he guesses correctly. Yes, there are ways to foil this attack, and that’s why we can still have four-digit PINs on ATM cards, but it’s the correct model for breaking passwords.
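A minimal sketch of that attack model, assuming a hypothetical leaked file of unsalted SHA-256 hashes. (Real sites should store salted, deliberately slow hashes; fast unsalted hashing is exactly what makes this attack cheap.)

```python
import hashlib

def offline_guess(leaked_hashes, guesses):
    """Hash each candidate and check it against the stolen hash file.
    Every hit immediately confirms a correct guess, offline and at
    full machine speed: no rate limiting, no lockouts."""
    cracked = {}
    for guess in guesses:
        digest = hashlib.sha256(guess.encode()).hexdigest()
        if digest in leaked_hashes:
            cracked[digest] = guess
    return cracked

# Hypothetical stolen file containing the hash of "letmein".
stolen = {hashlib.sha256(b"letmein").hexdigest()}
print(offline_guess(stolen, ["123456", "password", "letmein"]))
```

The attacker's only constraints are how fast he can hash and how cleverly he orders his guesses, which is why the next two sections matter.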

There are commercial programs that do password cracking, sold primarily to police departments. There are also hacker tools that do the same thing. And they’re really good.

The success of password cracking depends on two largely independent things: power and efficiency.

Power is simply computing power. As computers have become faster, they’re able to test more passwords per second; one program advertises eight million per second. These crackers might run for days, on many machines simultaneously. For a high-profile police case, they might run for months.

Efficiency is the ability to guess passwords cleverly. It doesn’t make sense to run through every eight-letter combination from “aaaaaaaa” to “zzzzzzzz” in order. That’s 200 billion possible passwords, most of them very unlikely. Password crackers try the most common passwords first.

A typical password consists of a root plus an appendage. The root isn’t necessarily a dictionary word, but it’s usually something pronounceable. An appendage is either a suffix (90% of the time) or a prefix (10% of the time). One cracking program I saw started with a dictionary of about 1,000 common passwords, things like “letmein,” “temp,” “123456,” and so on. Then it tested them each with about 100 common suffix appendages: “1,” “4u,” “69,” “abc,” “!,” and so on. It recovered about a quarter of all passwords with just these 100,000 combinations.

Crackers use different dictionaries: English words, names, foreign words, phonetic patterns and so on for roots; two digits, dates, single symbols and so on for appendages. They run the dictionaries with various capitalizations and common substitutions: “$” for “s”, “@” for “a,” “1” for “l” and so on. This guessing strategy quickly breaks about two-thirds of all passwords.
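The root-plus-appendage strategy is easy to sketch. The word list, suffix list, and substitution table below are tiny illustrative stand-ins for the large dictionaries real crackers ship with:

```python
from itertools import product

def candidates(roots, suffixes):
    """Generate guesses in rough likelihood order: each root bare and
    with common suffixes, plus a leetspeak-substituted variant of each."""
    subs = {"s": "$", "a": "@", "l": "1"}  # common substitutions
    for root, suffix in product(roots, [""] + suffixes):
        guess = root + suffix
        yield guess
        # substituted variant of the same guess
        mutated = "".join(subs.get(c, c) for c in guess)
        if mutated != guess:
            yield mutated

for guess in candidates(["letmein", "temp"], ["1", "69"]):
    print(guess)
```

With 1,000 roots and 100 suffixes, this enumerates a few hundred thousand guesses: seconds of work for a cracker testing millions of hashes per second, yet enough, per the numbers above, to recover a quarter of all passwords.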

Modern password crackers combine different words from their dictionaries:

What was remarkable about all three cracking sessions were the types of plains that got revealed. They included passcodes such as “k1araj0hns0n,” “Sh1a-labe0uf,” “Apr!l221973,” “Qbesancon321,” “DG091101%,” “@Yourmom69,” “ilovetofunot,” “windermere2313,” “tmdmmj17,” and “BandGeek2014.” Also included in the list: “all of the lights” (yes, spaces are allowed on many sites), “i hate hackers,” “allineedislove,” “ilovemySister31,” “iloveyousomuch,” “Philippians4:13,” “Philippians4:6-7,” and “qeadzcwrsfxv1331.” “gonefishing1125” was another password Steube saw appear on his computer screen. Seconds after it was cracked, he noted, “You won’t ever find it using brute force.”

This is why the oft-cited XKCD scheme for generating passwords—string together individual words like “correcthorsebatterystaple”—is no longer good advice. The password crackers are on to this trick.

The attacker will feed any personal information he has access to about the password creator into the password crackers. A good password cracker will test names and addresses from the address book, meaningful dates, and any other personal information it has. Postal codes are common appendages. If it can, the guesser will index the target hard drive and create a dictionary that includes every printable string, including deleted files. If you ever saved an e-mail with your password, or kept it in an obscure file somewhere, or if your program ever stored it in memory, this process will grab it. And it will speed the process of recovering your password.

Last year, Ars Technica gave three experts a 16,000-entry encrypted password file, and asked them to break as many as possible. The winner got 90% of them, the loser 62%—in a few hours. It’s the same sort of thing we saw in 2012, 2007, and earlier. If there’s any new news, it’s that this kind of thing is getting easier faster than people think.

Pretty much anything that can be remembered can be cracked.

There’s still one scheme that works. Back in 2008, I described the “Schneier scheme”:

So if you want your password to be hard to guess, you should choose something that this process will miss. My advice is to take a sentence and turn it into a password. Something like “This little piggy went to market” might become “tlpWENT2m”. That nine-character password won’t be in anyone’s dictionary. Of course, don’t use this one, because I’ve written about it. Choose your own sentence—something personal.

Here are some examples:

  • WIw7,mstmsritt… = When I was seven, my sister threw my stuffed rabbit in the toilet.
  • Wow…doestcst = Wow, does that couch smell terrible.
  • Ltime@go-inag~faaa! = Long time ago in a galaxy not far away at all.
  • uTVM,TPw55:utvm,tpwstillsecure = Until this very moment, these passwords were still secure.

You get the idea. Combine a personally memorable sentence with some personally memorable tricks for modifying it, and you get a lengthy password. Of course, the site has to accept all of those non-alphanumeric characters and an arbitrarily long password. Otherwise, it’s much harder.
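The transformation itself can be illustrated mechanically. The sketch below is only a demonstration of the idea, not the scheme: the whole point is that your modification rules must be *personal*, because any fixed rule like this one would eventually end up in crackers’ rule sets. The substitution table and the “shout one word” twist are invented for the example.

```python
# Illustration only: a mechanical version of the sentence-to-password
# idea. Real use depends on personal, unpublished modification rules;
# a fixed rule like this would end up in crackers' rule sets.
SUBS = {"to": "2", "for": "4", "and": "&"}   # hypothetical word swaps

def sentence_to_password(sentence, emphasize=None):
    parts = []
    for word in sentence.split():
        w = word.strip(".,!?")
        if w.lower() in SUBS:
            parts.append(SUBS[w.lower()])          # swap whole word
        elif emphasize and w.lower() == emphasize:
            parts.append(w.upper())                # personal twist: shout it
        else:
            parts.append(w[0].lower())             # else keep first letter
    return "".join(parts)

print(sentence_to_password("This little piggy went to market",
                           emphasize="went"))
# -> "tlpWENT2m", matching the essay's worked example
```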

Even better is to use random unmemorable alphanumeric passwords (with symbols, if the site will allow them), and a password manager like Password Safe to create and store them. Password Safe includes a random password generation function. Tell it how many characters you want—twelve is my default—and it’ll give you passwords like y.)v_|.7)7Bl, B3h4_[%}kgv), and QG6,FN4nFAm_. The program supports cut and paste, so you’re not actually typing those characters very much. I’m recommending Password Safe for Windows because I wrote the first version, know the person currently in charge of the code, and trust its security. There are ports of Password Safe to other OSs, but I had nothing to do with those. There are also other password managers out there, if you want to shop around.
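The generator idea is simple to state: pick each character uniformly at random from a large alphabet, using a cryptographically secure random source. This is not Password Safe’s implementation, just a sketch of the same approach using Python’s `secrets` module:

```python
import secrets
import string

# Sketch of a Password Safe-style generator: characters drawn uniformly
# at random from a 94-symbol alphabet via a CSPRNG. This is not Password
# Safe's actual code, only the same underlying idea.
ALPHABET = string.ascii_letters + string.digits + string.punctuation

def random_password(length=12):
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(random_password())  # e.g. something like 'y.)v_|.7)7Bl', new each run
```

Twelve characters from a 94-symbol alphabet give roughly 12 × log₂ 94 ≈ 79 bits of entropy, which is far beyond any memorable-sentence scheme and the reason a manager-plus-random-password setup is the stronger choice.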

There’s more to passwords than simply choosing a good one:

  1. Never reuse a password you care about. Even if you choose a secure password, the site it’s for could leak it because of its own incompetence. You don’t want someone who gets your password for one application or site to be able to use it for another.
  2. Don’t bother updating your password regularly. Sites that require 90-day—or whatever—password upgrades do more harm than good. Unless you think your password might be compromised, don’t change it.
  3. Beware the “secret question.” You don’t want a backup system for when you forget your password to be easier to break than your password. Really, it’s smart to use a password manager. Or to write your passwords down on a piece of paper and secure that piece of paper.
  4. If a site offers two-factor authentication, seriously consider using it. It’s almost certainly a security improvement.

This essay previously appeared on BoingBoing.

Posted on March 3, 2014 at 7:48 AM · 235 Comments
