Crypto-Gram

June 15, 2015

by Bruce Schneier
CTO, Resilient Systems, Inc.
schneier@schneier.com
https://www.schneier.com

A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.

For back issues, or to subscribe, visit <https://www.schneier.com/crypto-gram.html>.

You can read this issue on the web at <https://www.schneier.com/crypto-gram/archives/2015/…>. These same essays and news items appear in the “Schneier on Security” blog at <http://www.schneier.com/>, along with a lively and intelligent comment section. An RSS feed is available.


In this issue:
      The Logjam (and Another) Vulnerability against Diffie-Hellman Key Exchange
      NSA Running a Massive IDS on the Internet Backbone
      Duqu 2.0
      Why the Recent Section 215 Reform Debate Doesn’t Matter Much
      News
      TSA Not Detecting Weapons at Security Checkpoints
      Reassessing Airport Security
      Chris Roberts and Avionics Security
      Encrypting Windows Hard Drives
      Schneier News
      Should Companies Do Most of Their Computing in the Cloud?
      Eighth Movie-Plot Threat Contest Winner

The Logjam (and Another) Vulnerability against Diffie-Hellman Key Exchange

Logjam is a new attack against the Diffie-Hellman key-exchange protocol used in TLS. Basically:

The Logjam attack allows a man-in-the-middle attacker to downgrade vulnerable TLS connections to 512-bit export-grade cryptography. This allows the attacker to read and modify any data passed over the connection. The attack is reminiscent of the FREAK attack, but is due to a flaw in the TLS protocol rather than an implementation vulnerability, and attacks a Diffie-Hellman key exchange rather than an RSA key exchange. The attack affects any server that supports DHE_EXPORT ciphers, and affects all modern web browsers. 8.4% of the Top 1 Million domains were initially vulnerable.

One of the problems with patching the vulnerability is that it breaks things:

On the plus side, the vulnerability has largely been patched thanks to consultation with tech companies like Google, and updates are available now or coming soon for Chrome, Firefox and other browsers. The bad news is that the fix rendered many sites unreachable, including the main website at the University of Michigan, which is home to many of the researchers that *found* the security hole.

This is a common problem with version downgrade attacks; patching them makes you incompatible with anyone who hasn’t patched. And it’s the vulnerability the media is focusing on.

Much more interesting is the other vulnerability that the researchers found:

Millions of HTTPS, SSH, and VPN servers all use the same prime numbers for Diffie-Hellman key exchange. Practitioners believed this was safe as long as new key exchange messages were generated for every connection. However, the first step in the number field sieve—the most efficient algorithm for breaking a Diffie-Hellman connection—is dependent only on this prime. After this first step, an attacker can quickly break individual connections.

The researchers believe the NSA has been using this attack:

We carried out this computation against the most common 512-bit prime used for TLS and demonstrate that the Logjam attack can be used to downgrade connections to 80% of TLS servers supporting DHE_EXPORT. We further estimate that an academic team can break a 768-bit prime and that a nation-state can break a 1024-bit prime. Breaking the single, most common 1024-bit prime used by web servers would allow passive eavesdropping on connections to 18% of the Top 1 Million HTTPS domains. A second prime would allow passive decryption of connections to 66% of VPN servers and 26% of SSH servers. A close reading of published NSA leaks shows that the agency’s attacks on VPNs are consistent with having achieved such a break.

The DH precomputation lends itself to custom ASIC design, and it pipelines easily. Using Bitcoin mining hardware as a rough comparison, that suggests a speedup of a couple of orders of magnitude.
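
To make the shared-prime problem concrete, here is a minimal finite-field Diffie-Hellman sketch in Python. It is purely illustrative: the 64-bit modulus is a toy I chose for brevity and is wildly insecure, standing in for the standardized 512- and 1024-bit primes that real servers share. The per-connection secrets are fresh, but every exchange is computed modulo the same public prime p, and p is the only input the expensive number-field-sieve stages need.

    import secrets

    # Toy parameters for illustration only: p is the largest prime
    # below 2^64. Real servers share a handful of standardized
    # 512/1024-bit primes, which is exactly the problem.
    p = 2**64 - 59
    g = 2

    def dh_keypair():
        x = secrets.randbelow(p - 3) + 2  # fresh secret per connection
        return x, pow(g, x, p)            # public value is g^x mod p

    a_priv, a_pub = dh_keypair()
    b_priv, b_pub = dh_keypair()

    # Both endpoints derive the same session secret, g^(a*b) mod p.
    assert pow(b_pub, a_priv, p) == pow(a_pub, b_priv, p)

An attacker who has finished the precomputation for this particular p can recover x from any observed g^x mod p relatively quickly; because p is shared by millions of servers, that one-time investment breaks individual connections to all of them.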

Remember James Bamford’s 2012 comment about the NSA’s cryptanalytic capabilities:

According to another top official also involved with the program, the NSA made an enormous breakthrough several years ago in its ability to cryptanalyze, or break, unfathomably complex encryption systems employed by not only governments around the world but also many average computer users in the US. The upshot, according to this official: “Everybody’s a target; everybody with communication is a target.”

[…]

The breakthrough was enormous, says the former official, and soon afterward the agency pulled the shade down tight on the project, even within the intelligence community and Congress. “Only the chairman and vice chairman and the two staff directors of each intelligence committee were told about it,” he says. The reason? “They were thinking that this computing breakthrough was going to give them the ability to crack current public encryption.”

And remember Director of National Intelligence James Clapper’s introduction to the 2013 “Black Budget”:

Also, we are investing in groundbreaking cryptanalytic capabilities to defeat adversarial cryptography and exploit internet traffic.

It’s a reasonable guess that this is what both Bamford’s source and Clapper are talking about. It’s an attack that requires a lot of precomputation—just the sort of thing a national intelligence agency would go for.

But that requirement also speaks to its limitations. The NSA isn’t going to put this capability at collection points like Room 641A at AT&T’s San Francisco office: the precomputation table is too big, and the sensitivity of the capability is too high. More likely, an analyst identifies a target through some other means, and then looks for data from that target in databases like XKEYSCORE. Then he sends whatever ciphertext he finds to the Cryptanalysis and Exploitation Services (CES) group, which decrypts it if it can, using this and other techniques.

Ross Anderson wrote about this earlier this month, almost certainly quoting Snowden:

As for crypto capabilities, a lot of stuff is decrypted automatically on ingest (e.g. using a “stolen cert”, presumably a private key obtained through hacking). Else the analyst sends the ciphertext to CES and they either decrypt it or say they can’t.

The analysts are instructed not to think about how this all works. The same rule applied at GCHQ:

Strict guidelines were laid down at the GCHQ complex in Cheltenham, Gloucestershire, on how to discuss projects relating to decryption. Analysts were instructed: “Do not ask about or speculate on sources or methods underpinning Bullrun.”

I remember the same instructions in documents I saw about the NSA’s CES.

Again, the NSA has put surveillance ahead of security. It never bothered to tell us that many of the “secure” encryption systems we were using were not secure. And we don’t know what other national intelligence agencies independently discovered and used this attack.

The good news is that, now that we know reusing prime numbers is a bad idea, we can stop doing it.

https://weakdh.org/
https://weakdh.org/imperfect-forward-secrecy.pdf

http://www.engadget.com/2015/05/20/…
http://arstechnica.com/security/2015/05/…
http://www.wsj.com/articles/…
http://it.slashdot.org/story/15/05/20/1258251/…

Bitcoin mining hardware:
https://en.bitcoin.it/wiki/Mining_hardware_comparison

Bamford’s comment:
http://www.wired.com/2012/03/ff_nsadatacenter/all/1

The DNI Black Budget:
http://www.washingtonpost.com/world/…

Ross Anderson’s comment:
https://www.lightbluetouchpaper.org/2015/05/02/…

GCHQ quote:
http://www.theguardian.com/world/2013/sep/05/…

NSA is putting surveillance ahead of security:
https://www.schneier.com/blog/archives/2015/03/…

Good analysis of the cryptography:
http://www.scottaaronson.com/blog/?p=2293

Good explanation of the attack by Matthew Green:
http://blog.cryptographyengineering.com/2015/05/…


NSA Running a Massive IDS on the Internet Backbone

The latest story from the Snowden documents, co-published by the New York Times and ProPublica, shows that the NSA is operating a signature-based intrusion detection system on the Internet backbone:

In mid-2012, Justice Department lawyers wrote two secret memos permitting the spy agency to begin hunting on Internet cables, without a warrant and on American soil, for data linked to computer intrusions originating abroad—including traffic that flows to suspicious Internet addresses or contains malware, the documents show.

The Justice Department allowed the agency to monitor only addresses and “cybersignatures”—patterns associated with computer intrusions—that it could tie to foreign governments. But the documents also note that the N.S.A. sought to target hackers even when it could not establish any links to foreign powers.

To me, the big deal here is 1) the NSA is doing this without a warrant, and 2) that the policy change happened in secret, without any public policy debate.

The effort is the latest known expansion of the N.S.A.’s warrantless surveillance program, which allows the government to intercept Americans’ cross-border communications if the target is a foreigner abroad. While the N.S.A. has long searched for specific email addresses and phone numbers of foreign intelligence targets, the Obama administration three years ago started allowing the agency to search its communications streams for less-identifying Internet protocol addresses or strings of harmful computer code.

[…]

To carry out the orders, the F.B.I. negotiated in 2012 to use the N.S.A.’s system for monitoring Internet traffic crossing “chokepoints operated by U.S. providers through which international communications enter and leave the United States,” according to a 2012 N.S.A. document. The N.S.A. would send the intercepted traffic to the bureau’s “cyberdata repository” in Quantico, Virginia.

Ninety pages of NSA documents accompany the article.

Jonathan Mayer was consulted on the article. He gives more details on his blog, which I recommend you all read.

In my view, the key takeaway is this: for over a decade, there has been a public policy debate about what role the NSA should play in domestic cybersecurity. The debate has largely presupposed that the NSA’s domestic authority is narrowly circumscribed, and that DHS and DOJ play a far greater role. Today, we learn that assumption is incorrect. The NSA already asserts broad domestic cybersecurity powers. Recognizing the scope of the NSA’s authority is particularly critical for pending legislation.

This is especially important for pending information sharing legislation, which Mayer explains.

The other big news is that ProPublica’s Julia Angwin is working with Laura Poitras on the Snowden documents. I expect that this isn’t the last article we’re going to see.

News story:
http://www.nytimes.com/2015/06/05/us/…
https://www.propublica.org/article/…

The documents:
https://www.eff.org/files/2015/06/04/…

Jonathan Mayer’s blog post:
http://webpolicy.org/2015/06/04/nsa-cybersecurity/

Julia Angwin:
http://juliaangwin.com/

Shane Harris explains how the NSA and FBI are working together on Internet surveillance.
http://www.thedailybeast.com/articles/2015/06/04/…

Benjamin Wittes says that the story is wrong, that “combating overseas cybersecurity threats from foreign governments” is exactly what the NSA is supposed to be doing, and that they don’t need a warrant for any of that.
http://www.lawfareblog.com/2015/06/…

Charlie Savage responds to Ben Wittes.
http://www.lawfareblog.com/2015/06/…

Marcy Wheeler points out that she has been saying for years that the NSA has been using Section 702 to justify Internet surveillance.
https://www.emptywheel.net/2015/06/04/…


Duqu 2.0

Kaspersky Lab has discovered and publicized details of a new nation-state surveillance malware system called Duqu 2.0. It’s being attributed to Israel.

There are a lot of details, and I recommend reading them. There was probably a Kerberos zero-day vulnerability involved, allowing the attackers to send updates to Kaspersky’s clients. There’s code specifically targeting anti-virus software from Kaspersky and other vendors. The system includes anti-sniffer defenses and packet-injection code. It’s designed to reside in RAM so that it better avoids detection. This is all very sophisticated.

Eugene Kaspersky wrote an op-ed condemning the attack—and making his company look good—and almost, but not quite, comparing attacking his company to attacking the Red Cross:

Historically companies like mine have always played an important role in the development of IT. When the number of Internet users exploded, cybercrime skyrocketed and became a serious threat to the security of billions of Internet users and connected devices. Law enforcement agencies were not prepared for the advent of the digital era, and private security companies were alone in providing protection against cybercrime—both to individuals and to businesses. The security community has been something like a group of doctors for the Internet; we even share some vocabulary with the medical profession: we talk about ‘viruses’, ‘disinfection’, etc. And obviously we’re helping law enforcement develop its skills to fight cybercrime more effectively.

One thing that struck me from a very good Wired article on Duqu 2.0:

Raiu says each of the infections began within three weeks before the P5+1 meetings occurred at that particular location. “It cannot be coincidental,” he says. “Obviously the intention was to spy on these meetings.”

Initially Kaspersky was unsure all of these infections were related, because one of the victims appeared not to be part of the nuclear negotiations. But three weeks after discovering the infection, Raiu says, news outlets began reporting that negotiations were already taking place at the site. “Somehow the attackers knew in advance that this was one of the [negotiation] locations,” Raiu says.

Exactly how the attackers spied on the negotiations is unclear, but the malware contained modules for sniffing WiFi networks and hijacking email communications. But Raiu believes the attackers were more sophisticated than this. “I don’t think their style is to infect people connecting to the WiFi. I think they were after some kind of room surveillance—to hijack the audio through the teleconference or hotel phone systems.”

Those meetings were the P5+1 talks about Iran’s nuclear program, which we previously believed Israel was spying on. Look at the details of the attack, though: hack the hotel’s Internet, get into the phone system, and turn the hotel phones into room bugs. Very clever.

https://securelist.com/blog/research/70504/…
https://securelist.com/files/2015/06/…

Kaspersky op-ed:
http://www.forbes.com/sites/eugenekaspersky/2015/06/…

Wired article:
http://www.wired.com/2015/06/…

Israel spying on Iranian talks:
http://www.wsj.com/articles/…


Why the Recent Section 215 Reform Debate Doesn’t Matter Much

The ACLU’s Chris Soghoian explains why the current debate over Section 215 of the Patriot Act is just a minor facet of a large and complex bulk collection program by the FBI and the NSA.

There were 180 orders authorized last year by the FISA Court under Section 215—180 orders issued by this court. Only five of those orders relate to the telephony metadata program. There are 175 orders about completely separate things. In six weeks, Congress will either reauthorize this statute or let it expire, and we’re having a debate—to the extent we’re even having a debate—but the debate that’s taking place is focused on five of the 180, and there’s no debate at all about the other 175 orders.

Now, Senator Wyden has said there are other bulk collection programs targeted at Americans that the public would be shocked to learn about. We don’t know, for example, how the government collects records from Internet providers. We don’t know how they get bulk metadata from tech companies about Americans. We don’t know how the American government gets calling card records.

If we take General Hayden at face value—and I think you’re an honest guy—if the purpose of the 215 program is to identify people who are calling Yemen and Pakistan and Somalia, where one end is in the United States, your average Somali-American is not calling Somalia from their land line phone or their cell phone for the simple reason that AT&T will charge them $7.00 a minute in long distance fees. The way that people in the diaspora call home—the way that people in the Somali or Yemeni community call their family and friends back home—they walk into convenience stores and they buy prepaid calling cards. That is how regular people make international long distance calls.

So the 215 program that has been disclosed publicly, the 215 program that is being debated publicly, is about records to major carriers like AT&T and Verizon. We have not had a debate about surveillance requests, bulk orders to calling card companies, to Skype, to voice over Internet protocol companies. Now, if NSA isn’t collecting those records, they’re not doing their job. I actually think that that’s where the most useful data is. But why are we having this debate about these records that don’t contain a lot of calls to Somalia when we should be having a debate about the records that do contain calls to Somalia and do contain records of e-mails and instant messages and searches and people posting inflammatory videos to YouTube?

Certainly the government is collecting that data, but we don’t know how they’re doing it, we don’t know at what scale they’re doing it, and we don’t know with which authority they’re doing it. And I think it is a farce to say that we’re having a debate about the surveillance authority when really, we’re just debating this very narrow usage of the statute.

Further underscoring this point, yesterday the Department of Justice’s Office of the Inspector General released a redacted version of its internal audit of the FBI’s use of Section 215: “A Review of the FBI’s Use of Section 215 Orders: Assessment of Progress in Implementing Recommendations and Examination of Use in 2007 through 2009,” following the reports of the statute’s use from 2002-2005 and 2006. (Remember that the FBI and the NSA are inextricably connected here. The order to Verizon was *from* the FBI, requiring it to turn data over *to* the NSA.)

Details about legal justifications are all in the report, but detailed data on exactly what the FBI is collecting—whether targeted or bulk—is left out. We read that the FBI demanded “customer information” (p. 36), “medical and educational records” (p. 39), “account information and electronic communications transactional records” (p. 41), and “information regarding other cyber activity” (p. 42). Some of this was undoubtedly targeted against individuals; some of it was undoubtedly bulk.

I believe bulk collection is discussed in detail in Chapter VI. The chapter title is redacted, as well as the introduction (p. 46). Section A is “Bulk Telephony Metadata.” Section B (pp. 59-63) is completely redacted, including the section title. There’s a summary in the Introduction (p. 3): “In Section VI, we update the information about the uses of Section 215 authority described [redacted word] Classified Appendices to our last report. These appendices described the FBI’s use of Section 215 authority on behalf of the NSA to obtain bulk collections of telephony metadata [long redacted clause].” Sounds like a comprehensive discussion of bulk collection under Section 215.

What’s in there? As Soghoian says, certainly other communications systems like prepaid calling cards, Skype, text messaging systems, and e-mails. Search history and browser logs? Financial transactions? The “medical and educational records” mentioned above? Probably all of them—and the data is in the report, redacted (p. 29)—but there’s nothing public.

The problem is that those are the pages Congress should be debating, and not the telephony metadata program exposed by Snowden.

Soghoian quote (time 25:52-30:55):
https://www.youtube.com/watch?v=6aRklrv3r34

Justice’s Office of the Inspector General’s audit report:
http://cyberlawclinic.berkman.harvard.edu/2015/05/…
Implementing Recommendations and Examination of Use in 2007 through 2009:
https://oig.justice.gov/reports/2015/o1505.pdf

Previous audit reports:
https://oig.justice.gov/reports/2014/215-I.pdf
https://oig.justice.gov/reports/2014/215-II.pdf

Verizon order:
http://www.theguardian.com/world/interactive/2013/…

Other things the NSA is collecting:
https://twitter.com/AlexanderAbdo/status/…
https://twitter.com/AlexanderAbdo/status/…

Telephony metadata program:
http://www.theguardian.com/world/2013/jun/06/…

Marcy Wheeler’s commentary:
https://www.emptywheel.net/2015/05/21/…
https://www.emptywheel.net/2015/05/21/…


News

United Airlines offers frequent flier miles for finding security vulnerabilities—vulnerabilities on the website only, not in airport security or in the avionics.
http://www.bbc.com/news/technology-32753703

Spy dust was used by the Soviet Union during the Cold War. “A defecting agent revealed that powder containing both luminol and a substance called nitrophenyl pentadien (NPPD) had been applied to doorknobs, the floor mats of cars, and other surfaces that Americans living in Moscow had touched. They would then track or smear the substance over every surface they subsequently touched.”
http://io9.com/…

New research indicates that it’s very hard to completely patch systems against vulnerabilities.
https://www.umiacs.umd.edu/~tdumitra/blog/2015/04/…
https://www.umiacs.umd.edu/~tdumitra/blog/2015/04/…

New Pew Research report on Americans’ attitudes on privacy, security, and surveillance.
http://www.pewinternet.org/2015/05/20/…

A man was arrested for drug dealing based on the IP address he used while querying the USPS package tracking website.
http://motherboard.vice.com/read/…
http://arstechnica.com/tech-policy/2015/05/…

Interesting story of a complex and deeply hidden bug—the ZooKeeper Poison-Packet Bug—with AES as a part of it.
http://arstechnica.com/information-technology/2015/…

Riot-control stink bombs are coming to the US.
http://www.defenseone.com/technology/2015/04/…
http://www.dailymotion.com/video/…
http://news.bbc.co.uk/2/hi/middle_east/7653156.stm

Terrorist risks by city, according to actual data.
http://maplecroft.com/portfolio/new-analysis/2015/…
http://www.telegraph.co.uk/news/worldnews/11616606/…

The University of Adelaide is offering a new MOOC on “Cyberwar, Surveillance and Security.” Here’s a teaser video. I was interviewed for the class, and make a brief appearance in the teaser.
https://www.edx.org/course/…
https://www.youtube.com/watch?v=pqceFi7KGrI

Tox is an outsourced ransomware platform that anyone can use.
https://blogs.mcafee.com/mcafee-labs/…

A researcher was able to steal money from Starbucks by exploiting a race condition in its gift card value-transfer protocol. Basically, by initiating two identical web transfers at once, he was able to trick the system into recording them both. Normally, you could take a $5 gift card and move that money to another $5 gift card, leaving you with an empty gift card and a $10 gift card. He was able to duplicate the transfer, giving him an empty gift card and a $15 gift card.
https://www.schneier.com/blog/archives/2015/05/…
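
The bug class is easy to reproduce. Here is a hypothetical Python sketch of the check-then-act race; the card names and the timing window are invented for illustration, since Starbucks’s actual implementation isn’t public. Two concurrent transfer requests both read the source balance before either writes it back, so the same $5 is credited twice:

    import threading
    import time

    balances = {"card_a": 5, "card_b": 5}
    start = threading.Barrier(2)

    def transfer(src, dst, amount):
        start.wait()                      # line the two requests up
        if balances[src] >= amount:       # check
            bal = balances[src]           # read
            time.sleep(0.01)              # window for the other request
            balances[src] = bal - amount  # write back a stale value
            balances[dst] += amount       # credit the destination

    threads = [threading.Thread(target=transfer,
                                args=("card_a", "card_b", 5))
               for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    print(balances)  # {'card_a': 0, 'card_b': 15}: $5 from thin air

The fix is to make the check and the update a single atomic step, for example a conditional UPDATE in the database or a per-card lock held across the whole transfer.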

The United Nations’ Office of the High Commissioner released a report on the value of encryption and anonymity to the world.
https://www.schneier.com/blog/archives/2015/05/…

According to a Reuters article, the US military tried to launch Stuxnet against North Korea in addition to Iran:
http://www.reuters.com/article/2015/05/29/…

Two fun NSA surveillance quizzes. Okay, maybe not so fun.
Quiz 1: “Just How Kafkaesque is the Court that Oversees NSA Spying?”
http://www.washingtonpost.com/blogs/the-switch/wp/…
Quiz 2: “Can You Tell the Difference Between Bush and Obama on the Patriot Act?”
http://www.theguardian.com/commentisfree/2015/may/…

The Onion on NSA Surveillance:
http://www.theonion.com/article/…
More seriously:
https://www.eff.org/deeplinks/2015/05/…

There are smart billboards in Russia that change what they display when cops are watching.
https://nakedsecurity.sophos.com/2015/06/01/…
Of course there are a gazillion ways this kind of thing will go wrong. I’m more interested in the general phenomenon of smart devices identifying us automatically and without our knowledge.

On June 1, EPIC—that’s the Electronic Privacy Information Center—had its annual Champions of Freedom Dinner. I tell you this for two reasons. One, I received a Lifetime Achievement Award. (I was incredibly honored to receive this, and I thank EPIC profusely.) And two, Apple’s CEO Tim Cook received a Champion of Freedom Award. His acceptance speech, delivered remotely, was amazing.
http://techcrunch.com/2015/06/02/…
http://www.mobilemag.com/2015/06/03/…

Yet another new biometric: brainprints.
http://www.wbng.com/news/local/…
http://cnbc.cmu.edu/~armstrong/papers/…

The news media is buzzing about how the US military identified the location of an ISIS HQ because someone there took a photo and posted it.
https://www.schneier.com/blog/archives/2015/06/…

Interesting research: “We Can Track You If You Take the Metro: Tracking Metro Riders Using Accelerometers on Smartphones”:
http://arxiv.org/abs/1505.05958v1

Interesting paper by Julie Cohen on the two fields of surveillance law and surveillance studies:
http://library.queensu.ca/ojs/index.php/…

This is interesting research: “How Near-Miss Events Amplify or Attenuate Risky Decision Making,” Catherine H. Tinsley, Robin L. Dillon, and Matthew A. Cronin.
http://create.usc.edu/sites/default/files/…

The Washington Post has a good two-part story on the history of insecurity of the Internet.
http://www.washingtonpost.com/sf/business/2015/05/…
http://www.washingtonpost.com/sf/business/2015/05/…

Uh oh. Robots are getting good with samurai swords.
http://www.zmescience.com/research/technology/…

Workshop on Security and Human Behavior (SHB) 2015.
https://www.schneier.com/blog/archives/2015/06/…


TSA Not Detecting Weapons at Security Checkpoints

This isn’t good:

An internal investigation of the Transportation Security Administration revealed security failures at dozens of the nation’s busiest airports, where undercover investigators were able to smuggle mock explosives or banned weapons through checkpoints in 95 percent of trials, ABC News has learned.

The series of tests were conducted by Homeland Security Red Teams who pose as passengers, setting out to beat the system.

According to officials briefed on the results of a recent Homeland Security Inspector General’s report, TSA agents failed 67 out of 70 tests, with Red Team members repeatedly able to get potential weapons through checkpoints.

The Acting Director of the TSA has been reassigned:

Homeland Security Secretary Jeh Johnson said in a statement Monday that Melvin Carraway would be moved to the Office of State and Local Law Enforcement at DHS headquarters “effective immediately.”

This is bad. I have often made the point that airport security doesn’t have to be 100% effective in detecting guns and bombs. Here I am in 2008:

If you’re caught at airport security with a bomb or a gun, the screeners aren’t just going to take it away from you. They’re going to call the police, and you’re going to be stuck for a few hours answering a lot of awkward questions. You may be arrested, and you’ll almost certainly miss your flight. At best, you’re going to have a very unpleasant day.

This is why articles about how screeners don’t catch every—or even a majority—of guns and bombs that go through the checkpoints don’t bother me. The screeners don’t have to be perfect; they just have to be good enough. No terrorist is going to base his plot on getting a gun through airport security if there’s a decent chance of getting caught, because the consequences of getting caught are too great.

A 95% failure rate is bad, because you can build a plot around sneaking something past the TSA.

I don’t know the details, or what failed. Was it the procedures or training? Was it the technology? Was it the PreCheck program? I hope we’ll learn details, and this won’t be swallowed in the great maw of government secrecy.

http://abcnews.go.com/ABCNews/…
http://abcnews.go.com/US/…

Me in 2008:
https://www.schneier.com/blog/archives/2008/09/…


Reassessing Airport Security

News that the Transportation Security Administration missed a whopping 95% of guns and bombs in recent airport security “red team” tests was justifiably shocking. It’s clear that we’re not getting value for the $7 billion we’re paying the TSA annually.

But there’s another conclusion, inescapable and disturbing to many, but good news all around: we don’t need $7 billion worth of airport security. These results demonstrate that there isn’t much risk of airplane terrorism, and we should ratchet security down to pre-9/11 levels.

We don’t need perfect airport security. We just need security that’s good enough to dissuade someone from building a plot around evading it. If you’re caught with a gun or a bomb, the TSA will detain you and call the FBI. Under those circumstances, even a medium chance of getting caught is enough to dissuade a sane terrorist. A 95% failure rate is too high, but a 20% one isn’t.

For those of us who have been watching the TSA, the 95% number wasn’t that much of a surprise. The TSA has been failing these sorts of tests since its inception: failures in 2003, a 91% failure rate at Newark Liberty International in 2006, a 75% failure rate at Los Angeles International in 2007, more failures in 2008. And those are just the public test results; I’m sure there are many more similarly damning reports the TSA has kept secret out of embarrassment.

Previous TSA excuses were that the results were isolated to a single airport, or not realistic simulations of terrorist behavior. That almost certainly wasn’t true then, but the TSA can’t even argue that now. The current test was conducted at many airports, and the testers didn’t use super-stealthy ninja-like weapon-hiding skills.

This is consistent with what we know anecdotally: the TSA misses a lot of weapons. Pretty much everyone I know has inadvertently carried a knife through airport security, and some people have told me about guns they mistakenly carried on airplanes. The TSA publishes statistics about how many guns it detects; last year, it was 2,212. This doesn’t mean the TSA missed another 42,000 guns last year (naively, 2,212 detections at a 95 percent failure rate would imply about 44,000 guns in total); a weapon that is mistakenly left in a carry-on bag is going to be easier to detect than a weapon deliberately hidden in the same bag. But we now know that it’s not hard to deliberately sneak a weapon through.

So why is the failure rate so high? The report doesn’t say, and I hope the TSA is going to conduct a thorough investigation as to the causes. My guess is that it’s a combination of things. Security screening is an incredibly boring job, and almost all alerts are false alarms. It’s very hard for people to remain vigilant in this sort of situation, and sloppiness is inevitable.

There are also technology failures. We know that current screening technologies are terrible at detecting the plastic explosive PETN—that’s what the underwear bomber had—and that a disassembled weapon has an excellent chance of getting through airport security. We know that some items allowed through airport security make excellent weapons.

The TSA is failing to defend us against the threat of terrorism. The only reason they’ve been able to get away with the scam for so long is that there isn’t much of a threat of terrorism to defend against.

Even with all these actual and potential failures, there have been no successful terrorist attacks against airplanes since 9/11. If there were lots of terrorists just waiting for us to let our guard down to destroy American planes, we would have seen attacks—attempted or successful—after all these years of screening failures. No one has hijacked a plane with a knife or a gun since 9/11. Not a single plane has blown up due to terrorism.

Terrorists are much rarer than we think, and launching a terrorist plot is much more difficult than we think. I understand this conclusion is counterintuitive, and contrary to the fearmongering we hear every day from our political leaders. But it’s what the data shows.

This isn’t to say that we can do away with airport security altogether. We need some security to dissuade the stupid or impulsive, but any more is a waste of money. The very rare smart terrorists are going to be able to bypass whatever we implement or choose an easier target. The more common stupid terrorists are going to be stopped by whatever measures we implement.

Smart terrorists are very rare, and we’re going to have to deal with them in two ways. One, we need vigilant passengers—that’s what protected us from both the shoe and the underwear bombers. And two, we’re going to need good intelligence and investigation—that’s how we caught the liquid bombers in their London apartments.

The real problem with airport security is that it’s only effective if the terrorists target airplanes. I generally am opposed to security measures that require us to correctly guess the terrorists’ tactics and targets. If we detect solids, the terrorists will use liquids. If we defend airports, they bomb movie theaters. It’s a lousy game to play, because we can’t win.

We should demand better results out of the TSA, but we should also recognize that the actual risk doesn’t justify their $7 billion budget. I’d rather see that money spent on intelligence and investigation—security that doesn’t require us to guess the next terrorist tactic and target, and works regardless of what the terrorists are planning next.

This essay previously appeared on CNN.com.
http://www.cnn.com/2015/06/05/opinions/…

The report:
http://www.cnn.com/2015/06/01/politics/…

The TSA’s budget:
http://www.fiercehomelandsecurity.com/story/…

The two classes of prohibited items:
https://www.schneier.com/blog/archives/2008/09/…

Old reports of TSA security failures:
https://www.boston.com/news/local/articles/2003/10/…
http://www.homelandstupidity.us/2006/10/31/…
http://www.homelandstupidity.us/2007/10/25/…
http://edition.cnn.com/2008/US/01/28/tsa.bombtest/…

How many guns the TSA finds:
http://blog.tsa.gov/2015/01/…

Weapons from items allowed through airport security:
https://www.schneier.com/blog/archives/2009/11/…

“Why Aren’t There More Terrorist Attacks?”
https://www.schneier.com/blog/archives/2010/05/…

The test results prove there’s no threat:
http://blog.erratasec.com/2015/06/…

Bypassing airport security:
http://news.nationalpost.com/news/…

Stupid terrorists:
http://politicalscience.osu.edu/faculty/jmueller//…


Chris Roberts and Avionics Security

Last month, I blogged about security researcher Chris Roberts being detained by the FBI after tweeting about avionics security while on a United flight:

But to me, the fascinating part of this story is that a computer was monitoring the Twitter feed and understood the obscure references, alerted a person who figured out who wrote them, researched what flight he was on, and sent an FBI team to the Syracuse airport within a couple of hours. There’s some serious surveillance going on.

We know a lot more of the back story from the FBI’s warrant application. He had been interviewed by the FBI multiple times previously, and claimed to have taken control of at least some of a plane’s controls during flight.

During two interviews with F.B.I. agents in February and March of this year, Roberts said he hacked the inflight entertainment systems of Boeing and Airbus aircraft, during flights, about 15 to 20 times between 2011 and 2014. In one instance, Roberts told the federal agents he hacked into an airplane’s thrust management computer and momentarily took control of an engine, according to an affidavit attached to the application for a search warrant.

“He stated that he successfully commanded the system he had accessed to issue the ‘CLB’ or climb command. He stated that he thereby caused one of the airplane engines to climb resulting in a lateral or sideways movement of the plane during one of these flights,” said the affidavit, signed by F.B.I. agent Mike Hurley.

Roberts also told the agents he hacked into airplane networks and was able “to monitor traffic from the cockpit system.”

According to the search warrant application, Roberts said he hacked into the systems by accessing the in-flight entertainment system using his laptop and an Ethernet cable.

This makes the FBI’s behavior much more reasonable. They weren’t scanning the Twitter feed for random keywords; they were watching his account.

We don’t know if the FBI’s statements are true, though. But if Roberts was hacking an airplane while sitting in the passenger seat…wow, is that a stupid thing to do.

From the Christian Science Monitor:

But Roberts’ statements and the FBI’s actions raise as many questions as they answer. For Roberts, the question is why the FBI is suddenly focused on years-old research that has long been part of the public record.

“This has been a known issue for four or five years, where a bunch of us have been stood up and pounding our chest and saying, ‘This has to be fixed,'” Roberts noted. “Is there a credible threat? Is something happening? If so, they’re not going to tell us,” he said.

Roberts isn’t the only one confused by the series of events surrounding his detention in April and the revelations about his interviews with federal agents.

“I would like to see a transcript (of the interviews),” said one former federal computer crimes prosecutor, speaking on condition of anonymity. “If he did what he said he did, why is he not in jail? And if he didn’t do it, why is the FBI saying he did?”

The real issue is that the avionics and the entertainment system are on the same network. That’s an even stupider thing to do. Also last month, I wrote about the risks of hacking airplanes, and said that I wasn’t all that worried about it. Now I’m more worried.

Previous blog entry:
https://www.schneier.com/blog/archives/2015/04/…

Warrant application:
http://aptn.ca/news/2015/05/15/…

Wired article:
http://www.wired.com/2015/05/…

Christian Science Monitor article:
http://www.csmonitor.com/World/Passcode/2015/0518/…

Avionics security issue:
http://it.slashdot.org/story/15/05/18/2033242/…

My previous essay on avionics security:
https://www.schneier.com/blog/archives/2015/04/…


Encrypting Windows Hard Drives

Encrypting your Windows hard drives is trivially easy; choosing which program to use is annoyingly difficult. I still use Windows—yes, I know, don’t even start—and have intimate experience with this issue.

Historically, I used PGP Disk. I used it because I knew and trusted the designers. I even used it after Symantec bought the company. But big companies are always suspect, because there are a lot of ways for governments to manipulate them.

Then, I used TrueCrypt. I used it because it was open source. But the anonymous developers weirdly abdicated in 2014, when Microsoft ended support for Windows XP. I stuck with the program for a while, saying:

For Windows, the options are basically BitLocker, Symantec’s PGP Disk, and TrueCrypt. I choose TrueCrypt as the least bad of all the options.

But soon after that, despite the public audit of TrueCrypt, I bailed for BitLocker.

BitLocker is Microsoft’s native disk encryption program. Yes, it’s from a big company. But it was designed by my colleague and friend Niels Ferguson, whom I trust. Switching was a snap decision, though, and much had changed since Niels designed it in 2006. Specifically, Microsoft made a bunch of changes to BitLocker for Windows 8, including removing something Niels designed called the “Elephant Diffuser.”
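
The diffuser matters because plain AES-CBC sector encryption is malleable: flip one bit of a ciphertext block and you flip the same bit of the next plaintext block, garbling only the block that contains the flip. The diffuser spread any such change across the entire sector. Here is a minimal demonstration of the underlying CBC property using the pyca/cryptography library; it is a generic CBC example, not BitLocker’s exact sector format:

    import os
    from cryptography.hazmat.primitives.ciphers import (
        Cipher, algorithms, modes)

    key, iv = os.urandom(32), os.urandom(16)
    pt = b"first 16b block." + b"amount=000000100"  # two 16-byte blocks

    enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
    ct = bytearray(enc.update(pt) + enc.finalize())

    ct[7] ^= 0x01  # flip one ciphertext bit in block 0

    dec = Cipher(algorithms.AES(key), modes.CBC(iv)).decryptor()
    out = dec.update(bytes(ct)) + dec.finalize()

    # Block 0 decrypts to garbage, but block 1 differs from the
    # original in exactly the bit we flipped: a controlled change.
    assert out[:16] != pt[:16]
    assert out[23] == pt[23] ^ 0x01

Without a diffuser, an attacker with write access to the encrypted disk can make targeted changes to known plaintext; that is the attack surface Niels’s design was meant to close.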

The Intercept’s Micah Lee recently recommended BitLocker and got a lot of pushback from the security community. Last week, he published more research and explanation about the trade-offs. It’s worth reading. Microsoft told him they removed the Elephant Diffuser for performance reasons. And I agree with his ultimate conclusion:

Based on what I know about BitLocker, I think it’s perfectly fine for average Windows users to rely on, which is especially convenient considering it comes with many PCs. If it ever turns out that Microsoft is willing to include a backdoor in a major feature of Windows, then we have much bigger problems than the choice of disk encryption software anyway.

Whatever you choose, if trusting a proprietary operating system not to be malicious doesn’t fit your threat model, maybe it’s time to switch to Linux.

Micah also nicely explains how TrueCrypt is becoming antiquated, and not keeping up with Microsoft’s file system changes.

Lately, I am liking an obscure program called BestCrypt, by a Finnish company called Jetico. Micah quotes me:

Considering Schneier has been outspoken for decades about the importance of open source cryptography, I asked if he recommends that other people use BestCrypt, even though it’s proprietary. “I do recommend BestCrypt,” Schneier told me, “because I have met people at the company and I have a good feeling about them. Of course I don’t know for sure; this business is all about trust. But right now, given what I know, I trust them.”

I know it’s not a great argument. But, again, I’m trying to find the least bad option. And in the end, you either have to write your own software or trust someone else to write it for you.

But, yes, this should be an easier decision.

Me using PGPDisk:
https://www.schneier.com/essays/archives/2007/11/…

TrueCrypt and me:
http://truecrypt.sourceforge.net/
https://www.schneier.com/blog/archives/2014/05/…
http://www.wilderssecurity.com/threads/…
http://blog.cryptographyengineering.com/2015/04/…

BitLocker:
https://www.schneier.com/blog/archives/2006/05/…

Niels Ferguson on backdoors in BitLocker:
http://blogs.msdn.com/b/si_team/archive/2006/03/02/…

Me speculating on backdoors in BitLocker:
https://www.schneier.com/blog/archives/2015/03/…

The Elephant Diffuser:
http://spi.unob.cz/presentations/23-May/…
http://css.csail.mit.edu/6.858/2012/readings/…

Micah Lee on BitLocker and hard-drive encryption:
https://firstlook.org/theintercept/2015/04/27/…
https://firstlook.org/theintercept/2015/06/04/…

BestCrypt:
http://www.jetico.com/products/personal-privacy/…

Me on open source:
https://www.schneier.com/crypto-gram/archives/1999/…


Schneier News

I’m speaking at the Norwegian Developers Conference in Oslo on June 17.
http://www.ndcoslo.com/ndc_speakers

I’ll be signing books at the Resilient Security booth on June 18 at the 27th Annual FIRST Conference in Berlin, Germany.
https://www.first.org/conference/2015/berlin

I’m speaking at the Workshop on Economics and Information Security in Delft on June 22.
http://weis2015.econinfosec.org/

I’m speaking at the 5th International Cybersecurity Conference in Tel Aviv on June 24.
http://sectech.tau.ac.il/cyberconference15/

I was just named one of the 20 Top Security Influencers by eSecurityPlanet:
http://www.esecurityplanet.com/network-security/… or https://tinyurl.com/ppupxd8

I was interviewed by the BBC on cybersecurity:
http://www.bbc.co.uk/programmes/p02snkwm#auto

I was interviewed by Strife on Data and Goliath:
http://strifeblog.org/2015/06/05/… or https://tinyurl.com/pcrt6sv

Two publications covered my talk on the Sony attack and the future of cyberconflict:
http://www.theregister.co.uk/2015/06/04/… or https://tinyurl.com/ouf9uxz
http://www.computerweekly.com/news/4500247514/… or https://tinyurl.com/ncdhula

I appeared on “The Lead” with Jake Tapper to talk about the TSA:
http://www.cnn.com/videos/tv/2015/06/02/… or https://tinyurl.com/q29cukr

I was interviewed by Wired on Data and Goliath:
http://www.wired.com/2015/05/…


Should Companies Do Most of Their Computing in the Cloud? (Part 1)

Yes. No. Yes. Maybe. Yes. Okay, it’s complicated.

The economics of cloud computing are compelling. For companies, the lower operating costs, the lack of capital expenditure, the ability to quickly scale and the ability to outsource maintenance are just some of the benefits. Computing is infrastructure, like cleaning, payroll, tax preparation and legal services. All of these are outsourced. And computing is becoming a utility, like power and water. Everyone does their power generation and water distribution “in the cloud.” Why should IT be any different?

Two reasons. The first is that IT is complicated: it is more like payroll services than like power generation. What this means is that you have to choose your cloud providers wisely, and make sure you have good contracts in place with them. You want to own your data, and be able to download that data at any time. You want assurances that your data will not disappear if the cloud provider goes out of business or discontinues your service. You want reliability and availability assurances, tech support assurances, whatever you need.

The downside is that you will have limited customization options. Cloud computing is cheaper because of economies of scale, and—like any outsourced task—you tend to get what you get. A restaurant with a limited menu is cheaper than a personal chef who can cook anything you want. Fewer options at a much cheaper price: it’s a feature, not a bug.

The second reason that cloud computing is different is security. This is not an idle concern. IT security is difficult under the best of circumstances, and security risks are one of the major reasons it has taken so long for companies to embrace the cloud. And here it really gets complicated.

On the pro-cloud side, cloud providers have the potential to be far more secure than the corporations whose data they are holding. The same economies of scale apply. For most companies, the cloud provider is likely to have better security than they do—by a lot. All but the largest companies benefit from the concentration of security expertise at the cloud provider.

On the anti-cloud side, the cloud provider might not meet your legal needs. You might have regulatory requirements that the cloud provider cannot meet. Your data might be stored in a country with laws you do not like—or cannot legally use. Many foreign companies are thinking twice about putting their data inside America, because of laws allowing the government to get at that data in secret. Other countries around the world have even more draconian government-access rules.

Also on the anti-cloud side, a large cloud provider is a juicier target. Whether or not this matters depends on your threat profile. Criminals already steal far more credit card numbers than they can monetize; they are more likely to go after the smaller, less-defended networks. But a national intelligence agency will prefer the one-stop shop a cloud provider affords. That is why the NSA broke into Google’s data centers.

Finally, the loss of control is a security risk. Moving your data into the cloud means that someone else is controlling that data. This is fine if they do a good job, but terrible if they do not. And for free cloud services, that loss of control can be critical. The cloud provider can delete your data on a whim, if it believes you have violated some term of service that you never even knew existed. And you have no recourse.

As a business, you need to weigh the benefits against the risks. And that will depend on things like the type of cloud service you’re considering, the type of data that’s involved, how critical the service is, how easily you could do it in house, the size of your company and the regulatory environment, and so on.

This essay previously appeared on the Economist website, as part of a debate on cloud computing. It’s the first of three essays.
http://debates.economist.com/debate/cloud-computing
Visit the site for the other side of the debate and other commentary.


Should Companies Do Most of Their Computing in the Cloud? (Part 2)

Let me start by describing two approaches to the cloud.

Most of the students I meet at Harvard University live their lives in the cloud. Their e-mail, documents, contacts, calendars, photos and everything else are stored on servers belonging to large internet companies in America and elsewhere. They use cloud services for everything. They converse and share on Facebook and Instagram and Twitter. They seamlessly switch among their laptops, tablets and phones. It wouldn’t be a stretch to say that they don’t really care where their computers end and the internet begins, and they are used to having immediate access to all of their data on the closest screen available.

In contrast, I personally use the cloud as little as possible. My e-mail is on my own computer—I am one of the last Eudora users—and not at a web service like Gmail or Hotmail. I don’t store my contacts or calendar in the cloud. I don’t use cloud backup. I don’t have personal accounts on social networking sites like Facebook or Twitter. (This makes me a freak, but highly productive.) And I don’t use many software and hardware products that I would otherwise really like, because they force you to keep your data in the cloud: Trello, Evernote, Fitbit.

Why don’t I embrace the cloud in the same way my younger colleagues do? There are three reasons, and they parallel the trade-offs that corporations facing the same decisions will have to make.

The first is control. I want to be in control of my data, and I don’t want to give it up. I have the ability to keep control by running my own services my way. Most of those students lack the technical expertise, and have no choice. They also want services that are only available on the cloud, and have no choice. I have deliberately made my life harder, simply to keep that control. Similarly, companies are going to decide whether or not they want to—or even can—keep control of their data.

The second is security. I talked about this at length in my opening statement. Suffice it to say that I am extremely paranoid about cloud security, and think I can do better. Lots of those students don’t care very much. Again, companies are going to have to make the same decision about who is going to do a better job, and depending on their own internal resources, they might make a different decision.

The third is the big one: trust. I simply don’t trust large corporations with my data. I know that, at least in America, they can sell my data at will and disclose it to whomever they want. It can be made public inadvertently by their lax security. My government can get access to it without a warrant. Again, lots of those students don’t care. And again, companies are going to have to make the same decisions.

Like any outsourcing relationship, cloud services are based on trust. If anything, that is what you should take away from this exchange. Try to do business only with trustworthy providers, and put contracts in place to ensure their trustworthiness. Push for government regulations that establish a baseline of trustworthiness for cases where you don’t have that negotiation power. Fight laws that give governments secret access to your data in the cloud. Cloud computing is the future of computing; we need to ensure that it is secure and reliable.

Despite my personal choices, my belief is that, in most cases, the benefits of cloud computing outweigh the risks. My company, Resilient Systems, uses cloud services both to run the business and to host our own products that we sell to other companies. For us it makes the most sense. But we spend a lot of effort ensuring that we use only trustworthy cloud providers, and that we are a trustworthy cloud provider to our own customers.


Should Companies Do Most of Their Computing in the Cloud? (Part 3)

Cloud computing is the future of computing. Specialization and outsourcing make society more efficient and scalable, and computing isn’t any different.

But why aren’t we there yet? Why don’t we, in Simon Crosby’s words, “get on with it”? I have discussed some reasons: loss of control, new and unquantifiable security risks, and—above all—a lack of trust. It is not enough to simply discount them, as the number of companies not embracing the cloud shows. It is more useful to consider what we need to do to bridge the trust gap.

A variety of mechanisms can create trust. When I outsourced my food preparation to a restaurant last night, it never occurred to me to worry about food safety. That blind trust is largely created by government regulation. It ensures that our food is safe to eat, just as it ensures our paint will not kill us and our planes are safe to fly. It is all well and good for Mr. Crosby to write that cloud companies “will invest heavily to ensure that they can satisfy complex…regulations,” but this presupposes that we have comprehensive regulations. Right now, it is largely a free-for-all out there, and it can be impossible to see how security in the cloud works. When robust consumer-safety regulations underpin outsourcing, people can trust the systems.

This is true for any kind of outsourcing. Attorneys, tax preparers and doctors are licensed and highly regulated, by both governments and professional organizations. We trust our doctors to cut open our bodies because we know they are not just making it up. We need a similar professionalism in cloud computing.

Reputation is another big part of trust. We rely on both word-of-mouth and professional reviews to decide on a particular car or restaurant. But none of that works without considerable transparency. Security is an example. Mr. Crosby writes: “Cloud providers design security into their systems and dedicate enormous resources to protect their customers.” Maybe some do; many certainly do not. Without more transparency, as a cloud customer you cannot tell the difference. Try asking either Amazon Web Services or Salesforce.com to see the details of their security arrangements, or even to indemnify you for data breaches on their networks. It is even worse for free consumer cloud services like Gmail and iCloud.

We need to trust cloud computing’s performance, reliability and security. We need open standards, rules about being able to remove our data from cloud services, and the assurance that we can switch cloud services if we want to.

We also need to trust who has access to our data, and under what circumstances. One commenter wrote: “After Snowden, the idea of doing your computing in the cloud is preposterous.” He isn’t making a technical argument: a typical corporate data center isn’t any better defended than a cloud-computing one. He is making a legal argument. Under American law—and similar laws in other countries—the government can force your cloud provider to give up your data without your knowledge and consent. If your data is in your own data center, you at least get to see a copy of the court order.

Corporate surveillance matters, too. Many cloud companies mine and sell your data or use it to manipulate you into buying things. Blocking broad surveillance by both governments and corporations is critical to trusting the cloud, as is eliminating secret laws and orders regarding data access.

In the future, we will do all our computing in the cloud: both commodity computing and computing that requires personalized expertise. But this future will only come to pass when we manage to create trust in the cloud.


Eighth Movie-Plot Threat Contest Winner

On April 1, I announced the Eighth Movie-Plot Threat Contest:

I want a movie-plot threat that shows the evils of encryption. (For those who don’t know, a movie-plot threat is a scary-threat story that would make a great movie, but is much too specific to build security policies around. Contest history here.) We’ve long heard about the evils of the Four Horsemen of the Internet Apocalypse—terrorists, drug dealers, kidnappers, and child pornographers. (Or maybe they’re terrorists, pedophiles, drug dealers, and money launderers; I can never remember.) Try to be more original than that. And nothing too science fictional; today’s technology or presumed technology only.

On May 14, I announced the five semifinalists. The votes are in, and the winner is TonyK:

November 6 2020, the morning of the presidential election. This will be the first election where votes can be cast from smart phones and laptops. A record turnout is expected.

There is much excitement as live results are being displayed all over the place. Twitter, television, apps and websites are all displaying the vote counts. It is a close race between the leading candidates until about 9 am when a third candidate starts to rapidly close the gap. He was an unknown independent that had suspected ties to multiple terrorist organizations. There was outrage when he got on to the ballot, but it had quickly died down when he put forth no campaign effort.

By 11 am the independent was predicted to win, and the software called it for him at 3:22 pm.

At 4 the CEO of the software maker was being interviewed on CNN. There were accusations of everything from bribery to bugs to hackers being responsible for the results. Demands were made for audits and recounts. Some were even asking for the data to be made publicly available. The CEO calmly explained that there could be no audit or recount. The system was encrypted end to end and all the votes were cryptographically anonymized.

The interviewer was stunned and sat there in silence. When he eventually spoke, he said “We just elected a terrorist as the President of the United States.”

For the record, Nick P was a close runner-up.

Congratulations, TonyK. Contact me by e-mail, and I’ll send you your fabulous prizes.

https://www.schneier.com/blog/archives/2015/06/…


Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise. You can subscribe, unsubscribe, or change your address on the Web at <https://www.schneier.com/crypto-gram.html>. Back issues are also available at that URL.

Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.

CRYPTO-GRAM is written by Bruce Schneier. Bruce Schneier is an internationally renowned security technologist, called a “security guru” by The Economist. He is the author of 12 books—including “Liars and Outliers: Enabling the Trust Society Needs to Survive”—as well as hundreds of articles, essays, and academic papers. His influential newsletter “Crypto-Gram” and his blog “Schneier on Security” are read by over 250,000 people. He has testified before Congress, is a frequent guest on television and radio, has served on several government committees, and is regularly quoted in the press. Schneier is a fellow at the Berkman Center for Internet and Society at Harvard Law School, a program fellow at the New America Foundation’s Open Technology Institute, a board member of the Electronic Frontier Foundation, an Advisory Board Member of the Electronic Privacy Information Center, and the Chief Technology Officer at Resilient Systems, Inc. See <https://www.schneier.com>.

Crypto-Gram is a personal newsletter. Opinions expressed are not necessarily those of Resilient Systems, Inc.

Copyright (c) 2015 by Bruce Schneier.

Sidebar photo of Bruce Schneier by Joe MacInnis.