Blog: May 2015 Archives

Friday Squid Blogging: Nutty Conspiracy Theory Involving Both the NSA and SQUID

It's almost as if they wrote it for me.

These devices, which are known as super conducting quantum interference devices (SQUIDS for short), can be attached to NSA signals intelligence satellites and used to track the electromagnetic fields which surround each of our bodies.

These devices make it possible for agencies like the NSA (National Security Agency) to track any person via signals intelligence satellite 24 hours a day, while using EEG Heterodyning technology to synchronize these satellites with the unique EMF brainwave print of each American citizen.

Definitely tin-foil-hat territory. I don't recommend reading it all.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

And apologies for this being late. I forgot to schedule the post.

Posted on May 31, 2015 at 4:08 PM • 165 Comments

UN Report on the Value of Encryption to Freedom Worldwide

The United Nations' Office of the High Commissioner released a report on the value of encryption and anonymity to the world:

Summary: In the present report, submitted in accordance with Human Rights Council resolution 25/2, the Special Rapporteur addresses the use of encryption and anonymity in digital communications. Drawing from research on international and national norms and jurisprudence, and the input of States and civil society, the report concludes that encryption and anonymity enable individuals to exercise their rights to freedom of opinion and expression in the digital age and, as such, deserve strong protection.

Here's the bottom line:

60. States should not restrict encryption and anonymity, which facilitate and often enable the rights to freedom of opinion and expression. Blanket prohibitions fail to be necessary and proportionate. States should avoid all measures that weaken the security that individuals may enjoy online, such as backdoors, weak encryption standards and key escrows. In addition, States should refrain from making the identification of users a condition for access to digital communications and online services and requiring SIM card registration for mobile users. Corporate actors should likewise consider their own policies that restrict encryption and anonymity (including through the use of pseudonyms). Court-ordered decryption, subject to domestic and international law, may only be permissible when it results from transparent and publicly accessible laws applied solely on a targeted, case-by-case basis to individuals (i.e., not to a mass of people) and subject to judicial warrant and the protection of due process rights of individuals.

One news report called this "wishy-washy when it came to government-mandated backdoors to undermine encryption," but I don't see that. Government-mandated backdoors, key escrow, and weak encryption are all bad. Corporations should offer their users strong encryption and anonymity. Any systems that still leave corporations with the keys and/or the data -- and there are going to be lots of them -- should only give them up to the government in the face of an individual and lawful court order.

I think the principles are reasonable.

Posted on May 29, 2015 at 7:49 AM • 39 Comments

MOOC on Cybersecurity

The University of Adelaide is offering a new MOOC on "Cyberwar, Surveillance and Security." Here's a teaser video. I was interviewed for the class, and make a brief appearance in the teaser.

Posted on May 28, 2015 at 7:19 AM • 10 Comments

Terrorist Risks by City, According to Actual Data

I don't know enough about the methodology to judge it, but it's interesting:

In total, 64 cities are categorised as 'extreme risk' in Verisk Maplecroft's new Global Alerts Dashboard (GAD), an online mapping and data portal that logs and analyses every reported terrorism incident down to levels of 100m² worldwide. Based on the intensity and frequency of attacks in the 12 months following February 2014, combined with the number and severity of incidents in the previous five years, six cities in Iraq top the ranking. Over this period, the country's capital, Baghdad, suffered 380 terrorist attacks resulting in 1141 deaths and 3654 wounded, making it the world's highest risk urban centre, followed by Mosul, Al Ramadi, Ba'qubah, Kirkuk and Al Hillah.

Outside of Iraq, other capital cities rated 'extreme risk' include Kabul, Afghanistan (13th most at risk), Mogadishu, Somalia (14th), Sana'a, Yemen (19th) and Tripoli, Libya (48th). However, with investment limited in conflict and post-conflict locations, it is the risk posed by terrorism in the primary cities of strategic economies, such as Egypt, Israel, Kenya, Nigeria and Pakistan that has the potential to threaten business and supply chain continuity.

A news article:

According to the index, which ranks world cities by the likelihood of a terror attack based on historic trends, 64 cities around the world are at "extreme risk" of a terror attack.

Of these, the majority are in the Middle East (27) or Asia (19).
Some 14 are in Africa, where the rise of Boko Haram and al-Shabaab as well as political instability have increased risk.

Three are in Europe -- Luhansk (46) and Donetsk (56) in Ukraine, and Grozny (54) in Russia -- while Colombia's Cali (59) is the only South American city on the list.

No US city makes the list.

Posted on May 27, 2015 at 7:50 AM • 38 Comments

Race Condition Exploit in Starbucks Gift Cards

A researcher was able to steal money from Starbucks by exploiting a race condition in its gift card value-transfer protocol. Basically, by initiating two identical web transfers at once, he was able to trick the system into recording them both. Normally, you could take a $5 gift card and move that money to another $5 gift card, leaving you with an empty gift card and a $10 gift card. He was able to duplicate the transfer, giving him an empty gift card and a $15 gift card.

Race-condition attacks are unreliable and it took him a bunch of tries to get it right, but there's no reason to believe that he couldn't have kept doing this forever.
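The bug is a classic check-then-act race: the balance check and the debit are separate steps, so two identical requests can both pass the check against the same starting balance. Here's a minimal Python sketch of that failure mode -- the `GiftCard` class and the step-by-step interleaving are hypothetical stand-ins for Starbucks's backend, not its actual protocol:

```python
class GiftCard:
    def __init__(self, balance):
        self.balance = balance

def read_balance(card):
    # Step 1: the "check" -- done separately from the write below.
    return card.balance

def apply_transfer(src, dst, observed, amount):
    # Step 2: the "act" -- trusts a balance that was checked earlier,
    # so it can write a stale value.
    if observed >= amount:
        src.balance = observed - amount
        dst.balance += amount

a, b = GiftCard(5), GiftCard(5)

# Two identical $5 transfer requests A -> B arrive at once.
# Both pass the balance check before either write lands:
obs1 = read_balance(a)   # request 1 sees $5
obs2 = read_balance(a)   # request 2 also sees $5

apply_transfer(a, b, obs1, 5)   # A: 5 -> 0, B: 5 -> 10
apply_transfer(a, b, obs2, 5)   # A: stale 5 - 5 -> 0, B: 10 -> 15

print(a.balance, b.balance)     # 0 15: one $5 card became $15
```

The standard fix is to make the check and the debit a single atomic operation -- a database transaction with row locking, or a conditional update that fails if the balance changed since it was read.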

Unfortunately, there was really no one at Starbucks he could tell this to:

The hardest part -- responsible disclosure. Support guy honestly answered there's absolutely no way to get in touch with technical department and he's sorry I feel this way. Emailing on March 23 was futile (and it only was answered on Apr 29). After trying really hard to find anyone who cares, I managed to get this bug fixed in like 10 days.

The unpleasant part is a guy from Starbucks calling me with nothing like "thanks" but mentioning "fraud" and "malicious actions" instead. Sweet!

A little more from BBC News:

A spokeswoman for Starbucks told BBC News: "After this individual reported he was able to commit fraudulent activity against Starbucks, we put safeguards in place to prevent replication."

The company did not answer questions about its response to Mr Homakov.

More info.

Posted on May 26, 2015 at 4:51 PM • 66 Comments

Stink Bombs for Riot Control

They're coming to the US:

It's called Skunk, a type of "malodorant," or in plainer language, a foul-smelling liquid. Technically nontoxic but incredibly disgusting, it has been described as a cross between "dead animal and human excrement." Untreated, the smell lingers for weeks.

The Israeli Defense Forces developed Skunk in 2008 as a crowd-control weapon for use against Palestinians. Now Mistral, a company out of Bethesda, Md., says they are providing it to police departments in the United States.


The Israelis first used it in 2008 to disperse Palestinians protesting in the West Bank. A BBC video shows its first use in action, sprayed by a hose, a system that has come to be known as the "crap cannon."

Mistral reps say Skunk, once deployed, can be "neutralized" with a special soap -- and only with that soap. In another BBC video, an IDF spokesman describes how any attempt to wash it via regular means only exacerbates its effects. Six weeks after IDF forces used it against Palestinians at a security barrier, it still lingered in the air.

Posted on May 26, 2015 at 6:18 AM • 98 Comments

USPS Tracking Queries to Its Package Tracking Website

A man was arrested for drug dealing based on the IP address he used while querying the USPS package tracking website.

Posted on May 22, 2015 at 12:33 PM • 27 Comments

Why the Current Section 215 Reform Debate Doesn't Matter Much

The ACLU's Chris Soghoian explains (time 25:52-30:55) why the current debate over Section 215 of the Patriot Act is just a minor facet of a large and complex bulk collection program by the FBI and the NSA.

There were 180 orders authorized last year by the FISA Court under Section 215 -- 180 orders issued by this court. Only five of those orders relate to the telephony metadata program. There are 175 orders about completely separate things. In six weeks, Congress will either reauthorize this statute or let it expire, and we're having a debate -- to the extent we're even having a debate -- but the debate that's taking place is focused on five of the 180, and there's no debate at all about the other 175 orders.

Now, Senator Wyden has said there are other bulk collection programs targeted at Americans that the public would be shocked to learn about. We don't know, for example, how the government collects records from Internet providers. We don't know how they get bulk metadata from tech companies about Americans. We don't know how the American government gets calling card records.

If we take General Hayden at face value -- and I think you're an honest guy -- if the purpose of the 215 program is to identify people who are calling Yemen and Pakistan and Somalia, where one end is in the United States, your average Somali-American is not calling Somalia from their land line phone or their cell phone for the simple reason that AT&T will charge them $7.00 a minute in long distance fees. The way that people in the diaspora call home -- the way that people in the Somali or Yemeni community call their family and friends back home -- they walk into convenience stores and they buy prepaid calling cards. That is how regular people make international long distance calls.

So the 215 program that has been disclosed publicly, the 215 program that is being debated publicly, is about records to major carriers like AT&T and Verizon. We have not had a debate about surveillance requests, bulk orders to calling card companies, to Skype, to voice over Internet protocol companies. Now, if NSA isn't collecting those records, they're not doing their job. I actually think that that's where the most useful data is. But why are we having this debate about these records that don't contain a lot of calls to Somalia when we should be having a debate about the records that do contain calls to Somalia and do contain records of e-mails and instant messages and searches and people posting inflammatory videos to YouTube?

Certainly the government is collecting that data, but we don't know how they're doing it, we don't know at what scale they're doing it, and we don't know with which authority they're doing it. And I think it is a farce to say that we're having a debate about the surveillance authority when really, we're just debating this very narrow usage of the statute.

Further underscoring this point, yesterday the Department of Justice's Office of the Inspector General released a redacted version of its internal audit of the FBI's use of Section 215: "A Review of the FBI's Use of Section 215 Orders: Assessment of Progress in Implementing Recommendations and Examination of Use in 2007 through 2009," following the reports of the statute's use from 2002-2005 and 2006. (Remember that the FBI and the NSA are inextricably connected here. The order to Verizon was from the FBI, requiring it to turn data over to the NSA.)

Details about legal justifications are all in the report (see here for an important point about minimization), but detailed data on exactly what the FBI is collecting -- whether targeted or bulk -- is left out. We read that the FBI demanded "customer information" (p. 36), "medical and educational records" (p. 39), "account information and electronic communications transactional records" (p. 41), and "information regarding other cyber activity" (p. 42). Some of this was undoubtedly targeted against individuals; some of it was undoubtedly bulk.

I believe bulk collection is discussed in detail in Chapter VI. The chapter title is redacted, as well as the introduction (p. 46). Section A is "Bulk Telephony Metadata." Section B (pp. 59-63) is completely redacted, including the section title. There's a summary in the Introduction (p. 3): "In Section VI, we update the information about the uses of Section 215 authority described [redacted word] Classified Appendices to our last report. These appendices described the FBI's use of Section 215 authority on behalf of the NSA to obtain bulk collections of telephony metadata [long redacted clause]." Sounds like a comprehensive discussion of bulk collection under Section 215.

What's in there? As Soghoian says, certainly other communications systems like prepaid calling cards, Skype, text messaging systems, and e-mails. Search history and browser logs? Financial transactions? The "medical and educational records" mentioned above? Probably all of them -- and the data is in the report, redacted (p. 29) -- but there's nothing public.

The problem is that those are the pages Congress should be debating, and not the telephony metadata program exposed by Snowden.

EDITED TO ADD: Marcy Wheeler is going through the document line by line.

Posted on May 22, 2015 at 5:45 AM • 23 Comments

New Pew Research Report on Americans' Attitudes on Privacy, Security, and Surveillance

This is interesting:

The surveys find that Americans feel privacy is important in their daily lives in a number of essential ways. Yet, they have a pervasive sense that they are under surveillance when in public and very few feel they have a great deal of control over the data that is collected about them and how it is used. Adding to earlier Pew Research reports that have documented low levels of trust in sectors that Americans associate with data collection and monitoring, the new findings show Americans also have exceedingly low levels of confidence in the privacy and security of the records that are maintained by a variety of institutions in the digital age.

While some Americans have taken modest steps to stem the tide of data collection, few have adopted advanced privacy-enhancing measures. However, majorities of Americans expect that a wide array of organizations should have limits on the length of time that they can retain records of their activities and communications. At the same time, Americans continue to express the belief that there should be greater limits on government surveillance programs. Additionally, they say it is important to preserve the ability to be anonymous for certain online activities.

Lots of detail in the reports.

Posted on May 21, 2015 at 1:05 PM • 28 Comments

The Logjam (and Another) Vulnerability against Diffie-Hellman Key Exchange

Logjam is a new attack against the Diffie-Hellman key-exchange protocol used in TLS. Basically:

The Logjam attack allows a man-in-the-middle attacker to downgrade vulnerable TLS connections to 512-bit export-grade cryptography. This allows the attacker to read and modify any data passed over the connection. The attack is reminiscent of the FREAK attack, but is due to a flaw in the TLS protocol rather than an implementation vulnerability, and attacks a Diffie-Hellman key exchange rather than an RSA key exchange. The attack affects any server that supports DHE_EXPORT ciphers, and affects all modern web browsers. 8.4% of the Top 1 Million domains were initially vulnerable.

Here's the academic paper.

One of the problems with patching the vulnerability is that it breaks things:

On the plus side, the vulnerability has largely been patched thanks to consultation with tech companies like Google, and updates are available now or coming soon for Chrome, Firefox and other browsers. The bad news is that the fix rendered many sites unreachable, including the main website at the University of Michigan, which is home to many of the researchers that found the security hole.

This is a common problem with version downgrade attacks; patching them makes you incompatible with anyone who hasn't patched. And it's the vulnerability the media is focusing on.

Much more interesting is the other vulnerability that the researchers found:

Millions of HTTPS, SSH, and VPN servers all use the same prime numbers for Diffie-Hellman key exchange. Practitioners believed this was safe as long as new key exchange messages were generated for every connection. However, the first step in the number field sieve -- the most efficient algorithm for breaking a Diffie-Hellman connection -- is dependent only on this prime. After this first step, an attacker can quickly break individual connections.
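To see why a shared prime amortizes so well, here's a toy sketch in Python. It uses baby-step giant-step in place of the number field sieve (the real attack's algorithm), and a laughably small prime -- both are illustrative stand-ins -- but the structure is the same: the expensive table depends only on the group (p, g), not on any individual key exchange.

```python
import math

# Toy discrete-log break. The "precomputation" (the baby-step table)
# is per-prime, not per-connection -- so one shared prime lets an
# attacker break every key exchange that ever uses it.
p, g = 1000003, 2          # tiny shared group; Logjam targeted 512-bit primes

m = math.isqrt(p) + 1
baby = {pow(g, j, p): j for j in range(m)}   # done once per prime, reused forever
g_inv_m = pow(g, -m, p)                      # g^(-m) mod p (Python 3.8+)

def dlog(h):
    """Recover x with g^x = h (mod p), reusing the precomputed table."""
    y = h
    for i in range(m):
        if y in baby:
            return i * m + baby[y]
        y = (y * g_inv_m) % p
    return None

# Each "connection" picks a fresh secret, but one table breaks them all:
for secret in (31337, 123456, 271828):
    assert dlog(pow(g, secret, p)) == secret
print("every per-connection secret recovered from one precomputed table")
```

At real key sizes the per-prime step costs a nation-state budget, and the per-connection step is cheap -- which is exactly the economics the researchers describe.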

The researchers believe the NSA has been using this attack:

We carried out this computation against the most common 512-bit prime used for TLS and demonstrate that the Logjam attack can be used to downgrade connections to 80% of TLS servers supporting DHE_EXPORT. We further estimate that an academic team can break a 768-bit prime and that a nation-state can break a 1024-bit prime. Breaking the single, most common 1024-bit prime used by web servers would allow passive eavesdropping on connections to 18% of the Top 1 Million HTTPS domains. A second prime would allow passive decryption of connections to 66% of VPN servers and 26% of SSH servers. A close reading of published NSA leaks shows that the agency's attacks on VPNs are consistent with having achieved such a break.

Remember James Bamford's 2012 comment about the NSA's cryptanalytic capabilities:

According to another top official also involved with the program, the NSA made an enormous breakthrough several years ago in its ability to cryptanalyze, or break, unfathomably complex encryption systems employed by not only governments around the world but also many average computer users in the US. The upshot, according to this official: "Everybody's a target; everybody with communication is a target."


The breakthrough was enormous, says the former official, and soon afterward the agency pulled the shade down tight on the project, even within the intelligence community and Congress. "Only the chairman and vice chairman and the two staff directors of each intelligence committee were told about it," he says. The reason? "They were thinking that this computing breakthrough was going to give them the ability to crack current public encryption."

And remember Director of National Intelligence James Clapper's introduction to the 2013 "Black Budget":

Also, we are investing in groundbreaking cryptanalytic capabilities to defeat adversarial cryptography and exploit internet traffic.

It's a reasonable guess that this is what both Bamford's source and Clapper are talking about. It's an attack that requires a lot of precomputation -- just the sort of thing a national intelligence agency would go for.

But that requirement also speaks to its limitations. The NSA isn't going to put this capability at collection points like Room 641A at AT&T's San Francisco office: the precomputation table is too big, and the sensitivity of the capability is too high. More likely, an analyst identifies a target through some other means, and then looks for data by that target in databases like XKEYSCORE. Then he sends whatever ciphertext he finds to the Cryptanalysis and Exploitation Services (CES) group, which decrypts it if it can using this and other techniques.

Ross Anderson wrote about this earlier this month, almost certainly quoting Snowden:

As for crypto capabilities, a lot of stuff is decrypted automatically on ingest (e.g. using a "stolen cert", presumably a private key obtained through hacking). Else the analyst sends the ciphertext to CES and they either decrypt it or say they can't.

The analysts are instructed not to think about how this all works. This quote also applied to NSA employees:

Strict guidelines were laid down at the GCHQ complex in Cheltenham, Gloucestershire, on how to discuss projects relating to decryption. Analysts were instructed: "Do not ask about or speculate on sources or methods underpinning Bullrun."

I remember the same instructions in documents I saw about the NSA's CES.

Again, the NSA has put surveillance ahead of security. It never bothered to tell us that many of the "secure" encryption systems we were using were not secure. And we don't know which other national intelligence agencies independently discovered and used this attack.

The good news is now that we know reusing prime numbers is a bad idea, we can stop doing it.
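What "stop doing it" looks like in practice: each deployment generates its own Diffie-Hellman group rather than shipping a hardcoded prime. Here's a stdlib-only sketch -- the helper functions are my own, the 64-bit size is purely so the example runs instantly; real deployments want 2048-bit or larger groups (e.g. via `openssl dhparam -out dhparams.pem 2048`):

```python
import secrets

def is_probable_prime(n, rounds=40):
    """Miller-Rabin probabilistic primality test."""
    if n < 2:
        return False
    for small in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % small == 0:
            return n == small
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = secrets.randbelow(n - 3) + 2          # witness in [2, n-2]
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def fresh_safe_prime(bits):
    """Generate a fresh safe prime p = 2q + 1. Toy sizes only."""
    while True:
        q = secrets.randbits(bits - 1) | (1 << (bits - 2)) | 1
        if is_probable_prime(q) and is_probable_prime(2 * q + 1):
            return 2 * q + 1

# A unique group per deployment: precomputation against one server's
# prime buys an attacker nothing against the next server's.
p = fresh_safe_prime(64)   # toy size for speed; use >= 2048 bits for real
g = 2
a = secrets.randbelow(p - 3) + 2     # one side's ephemeral secret
b = secrets.randbelow(p - 3) + 2     # other side's ephemeral secret
A, B = pow(g, a, p), pow(g, b, p)
assert pow(B, a, p) == pow(A, b, p)  # both sides derive the same key
print("fresh toy group generated; key exchange succeeded")
```

The exchange itself is unchanged; only the group is fresh, which is what defeats the amortized precomputation.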

EDITED TO ADD: The DH precomputation easily lends itself to custom ASIC design, and is something that pipelines easily. Using Bitcoin mining hardware as a rough comparison, this means a couple orders of magnitude speedup.

EDITED TO ADD (5/23): Good analysis of the cryptography.

EDITED TO ADD (5/24): Good explanation by Matthew Green.

Posted on May 21, 2015 at 6:30 AM • 35 Comments

Research on Patch Deployment

New research indicates that it's very hard to completely patch systems against vulnerabilities:

It turns out that it may not be that easy to patch vulnerabilities completely. Using WINE, we analyzed the patch deployment process for 1,593 vulnerabilities from 10 Windows client applications, on 8.4 million hosts worldwide [Oakland 2015]. We found that a host may be affected by multiple instances of the same vulnerability, because the vulnerable program is installed in several directories or because the vulnerability is in a shared library distributed with several applications. For example, CVE-2011-0611 affected both the Adobe Flash Player and Adobe Reader (Reader includes a library for playing .swf objects embedded in a PDF). Because updates for the two products were distributed using different channels, the vulnerable host population decreased at different rates, as illustrated in the figure on the left. For Reader patching started 9 days after disclosure (after patch for CVE-2011-0611 was bundled with another patch in a new Reader release), and the update reached 50% of the vulnerable hosts after 152 days.

For Flash patching started earlier, 3 days after disclosure, but the patching rate soon dropped (a second patching wave, suggested by the inflection in the curve after 43 days, eventually subsided as well). Perhaps for this reason, CVE-2011-0611 was frequently targeted by exploits in 2011, using both the .swf and PDF vectors.


Posted on May 20, 2015 at 2:15 PM • 15 Comments

Spy Dust

Used by the Soviet Union during the Cold War:

A defecting agent revealed that powder containing both luminol and a substance called nitrophenyl pentadien (NPPD) had been applied to doorknobs, the floor mats of cars, and other surfaces that Americans living in Moscow had touched. They would then track or smear the substance over every surface they subsequently touched.

Posted on May 20, 2015 at 8:06 AM • 21 Comments

More on Chris Roberts and Avionics Security

Last month, I blogged about security researcher Chris Roberts being detained by the FBI after tweeting about avionics security while on a United flight:

But to me, the fascinating part of this story is that a computer was monitoring the Twitter feed and understood the obscure references, alerted a person who figured out who wrote them, researched what flight he was on, and sent an FBI team to the Syracuse airport within a couple of hours. There's some serious surveillance going on.

We know a lot more of the back story from the FBI's warrant application. He had been interviewed by the FBI multiple times previously, and claimed to have taken control of at least some of a plane's controls during flight.

During two interviews with F.B.I. agents in February and March of this year, Roberts said he hacked the inflight entertainment systems of Boeing and Airbus aircraft, during flights, about 15 to 20 times between 2011 and 2014. In one instance, Roberts told the federal agents he hacked into an airplane's thrust management computer and momentarily took control of an engine, according to an affidavit attached to the application for a search warrant.

"He stated that he successfully commanded the system he had accessed to issue the 'CLB' or climb command. He stated that he thereby caused one of the airplane engines to climb resulting in a lateral or sideways movement of the plane during one of these flights," said the affidavit, signed by F.B.I. agent Mike Hurley.

Roberts also told the agents he hacked into airplane networks and was able "to monitor traffic from the cockpit system."

According to the search warrant application, Roberts said he hacked into the systems by accessing the in-flight entertainment system using his laptop and an Ethernet cable.

Wired has more.

This makes the FBI's behavior much more reasonable. They weren't scanning the Twitter feed for random keywords; they were watching his account.

We don't know if the FBI's statements are true, though. But if Roberts was hacking an airplane while sitting in the passenger seat, that was a stupid thing to do.

From the Christian Science Monitor:

But Roberts' statements and the FBI's actions raise as many questions as they answer. For Roberts, the question is why the FBI is suddenly focused on years-old research that has long been part of the public record.

"This has been a known issue for four or five years, where a bunch of us have been stood up and pounding our chest and saying, 'This has to be fixed,'" Roberts noted. "Is there a credible threat? Is something happening? If so, they're not going to tell us," he said.

Roberts isn't the only one confused by the series of events surrounding his detention in April and the revelations about his interviews with federal agents.

"I would like to see a transcript (of the interviews)," said one former federal computer crimes prosecutor, speaking on condition of anonymity. "If he did what he said he did, why is he not in jail? And if he didn't do it, why is the FBI saying he did?"

The real issue is that the avionics and the entertainment system are on the same network. That's an even stupider thing to do. Also last month, I wrote about the risks of hacking airplanes, and said that I wasn't all that worried about it. Now I'm more worried.

Posted on May 19, 2015 at 8:00 AM • 204 Comments

United Airlines Offers Frequent Flier Miles for Finding Security Vulnerabilities

Vulnerabilities on the website only, not in airport security or in the avionics.

Posted on May 18, 2015 at 7:14 AM • 25 Comments

Friday Squid Blogging: NASA's Squid Rover

NASA is funding a study for a squid rover that could explore Europa's oceans.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Posted on May 15, 2015 at 4:08 PM • 232 Comments

Microbe Biometric


Franzosa and colleagues used publicly available microbiome data produced through the Human Microbiome Project (HMP), which surveyed microbes in the stool, saliva, skin, and other body sites from up to 242 individuals over a months-long period. The authors adapted a classical computer science algorithm to combine stable and distinguishing sequence features from individuals' initial microbiome samples into individual-specific "codes." They then compared the codes to microbiome samples collected from the same individuals at follow-up visits and to samples from independent groups of individuals.

The results showed that the codes were unique among hundreds of individuals, and that a large fraction of individuals' microbial "fingerprints" remained stable over a one-year sampling period. The codes constructed from gut samples were particularly stable, with more than 80% of individuals identifiable up to a year after the sampling period.
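The "classical computer science algorithm" here is a minimal-hitting-set construction. A heavily simplified sketch of the idea -- the feature names and samples below are made up, and the greedy pass is a stand-in for the paper's exact method -- a code is a small set of features whose joint presence is unique to one person:

```python
# Presence/absence of microbial sequence features per person's
# initial sample (hypothetical toy data).
samples = {
    "alice": {"m1", "m2", "m5", "m7"},
    "bob":   {"m1", "m3", "m5"},
    "carol": {"m2", "m3", "m6"},
}

def build_code(person, samples):
    """Greedily pick a small feature subset unique to `person`.

    Assumes no individual's feature set contains another's whole set
    (otherwise no distinguishing code exists).
    """
    own = samples[person]
    others = [s for name, s in samples.items() if name != person]
    code = set()
    for other in others:
        if code - other:                  # already distinguished from this one
            continue
        distinguishing = own - other      # features this other person lacks
        code.add(min(distinguishing))     # deterministic pick for the demo
    return code

codes = {p: build_code(p, samples) for p in samples}

def matches(code, sample):
    # A code "hits" a sample if every code feature is present in it.
    return code <= sample

# Each code matches its owner's sample and nobody else's:
for p in samples:
    assert all(matches(codes[p], samples[q]) == (p == q) for q in samples)
print({p: sorted(c) for p, c in codes.items()})
```

The paper's real contribution is then empirical: restricted to *stable* features, such codes still match the same person a year later, especially for gut samples.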

Posted on May 15, 2015 at 6:20 AM • 22 Comments

Eighth Movie-Plot Threat Contest Semifinalists

On April 1, I announced the Eighth Movie Plot Threat Contest: demonstrate the evils of encryption.

Not a whole lot of good submissions this year. Possibly this contest has run its course, and there's not much interest left. On the other hand, it's heartening to know that there aren't a lot of encryption movie-plot threats out there.

Anyway, here are the semifinalists.

  1. Child pornographers.

  2. Bombing the NSA.

  3. Torture.

  4. Terrorists and a vaccine.

  5. Election systems.

Cast your vote by number here; voting closes at the end of the month.


Previous contests.

Posted on May 14, 2015 at 11:26 PM • 104 Comments

Admiral Rogers Speaking at the Joint Service Academy Cyber Security Summit

Admiral Mike Rogers gave the keynote address at the Joint Service Academy Cyber Security Summit today at West Point. He started by explaining the four tenets of security that he thinks about.

First: partnerships. This includes government, civilian, everyone. Capabilities, knowledge, and insight of various groups, and aligning them to generate better outcomes for everyone. Ability to generate and share insight and knowledge, and to do that in a timely manner.

Second, innovation. It's about much more than just technology. It's about ways to organize, values, training, and so on. We need to think about innovation very broadly.

Third, technology. This is a technologically based problem, and we need to apply technology to defense as well.

Fourth, human capital. If we don't get people working right, all of this is doomed to fail. We need to build security workforces inside and outside of military. We need to keep them current in a world of changing technology.

So, what is the Department of Defense doing? They're investing in cyber, both because it's a critical part of future fighting of wars and because of the mission to defend the nation.

Rogers then explained the five strategic goals listed in the recent DoD cyber strategy:

  1. Build and maintain ready forces and capabilities to conduct cyberspace operations;

  2. Defend the DoD information network, secure DoD data, and mitigate risks to DoD missions;

  3. Be prepared to defend the U.S. homeland and U.S. vital interests from disruptive or destructive cyberattacks of significant consequence;

  4. Build and maintain viable cyber options and plan to use those options to control conflict escalation and to shape the conflict environment at all stages;

  5. Build and maintain robust international alliances and partnerships to deter shared threats and increase international security and stability.

Expect to see more detailed policy around these goals in the coming months.

What is the role of US Cyber Command and the NSA in all of this? Cyber Command has three missions related to the five strategic goals. They defend DoD networks. They create the cyber workforce. And, if directed, they defend national critical infrastructure.

At one point, Rogers said that he constantly reminds his people: "If it was designed by man, it can be defeated by man." I hope he also tells this to the FBI when they talk about needing third-party access to encrypted communications.

All of this has to be underpinned by a cultural ethos that recognizes the importance of professionalism and compliance. Every person with a keyboard is both a potential asset and a threat. There needs to be well-defined processes and procedures within DoD, and a culture of following them.

What's the threat dynamic, and what's the nature of the world? The threat is going to increase; it's going to get worse, not better; cyber is a great equalizer. Cyber doesn't recognize physical geography. Four "prisms" to look at threat: criminals, nation states, hacktivists, groups wanting to do harm to the nation. This fourth group is increasing. Groups like ISIL are going to use the Internet to cause harm. Also embarrassment: releasing documents, shutting down services, and so on.

We spend a lot of time thinking about how to stop attackers from getting in; we need to think more about how to get them out once they've gotten in -- and how to continue to operate even though they are in. (That was especially nice to hear, because that's what I'm doing at my company.) Sony was a "wake-up call": a nation-state using cyber for coercion. It was theft of intellectual property, denial of service, and destruction. And it was important for the US to acknowledge the attack, attribute it, and retaliate.

Last point: "Total force approach to the problem." It's not just about people in uniform. It's about active duty military, reserve military, corporations, government contractors -- everyone. We need to work on this together. "I am not interested in endless discussion.... I am interested in outcomes." "Cyber is the ultimate team sport." There's no single entity, or single technology, or single anything, that will solve all of this. He wants to partner with the corporate world, and to do it in a way that benefits both.

First question was about the domains and missions of the respective services. Rogers talked about the inherent expertise that each service brings to the problem, and how to use cyber to extend that expertise -- and the mission. The goal is to create a single integrated cyber force, but not a single service. Cyber occurs in a broader context, and that context is applicable to all the military services. We need to build on their individual expertise and contexts, and to apply them in an integrated way. Similar to how we do special forces.

Second question was about values, intention, and what's at risk. Rogers replied that any structure for the NSA has to integrate with the nation's values. He talked about the value of privacy. He also talked about "the security of the nation." Both are imperatives, and we need to achieve both at the same time. The problem is that the nation is polarized; the threat is getting worse at the same time trust is decreasing. We need to figure out how to improve trust.

Third question was about DoD protecting commercial cyberspace. Rogers replied that the DHS is the lead organization in this regard, and DoD provides capability through that civilian authority. Any DoD partnership with the private sector will go through DHS.

Fourth question: How will DoD reach out to corporations, both established and start-ups? Many ways. By providing people to the private sector. Funding companies, through mechanisms like the CIA's In-Q-Tel. And some sort of innovation capability. Those are the three main vectors, but more important is that the DoD mindset has to change. DoD has traditionally been very insular; in this case, more partnerships are required.

Final question was about the NSA sharing security information in some sort of semi-classified way. Rogers said that there are a lot of internal conversations about doing this. It's important.

In all, nothing really new or controversial.

These comments were recorded -- I can't find them online now -- and are on the record. Much of the rest of the summit was held under the Chatham House Rule. I participated in a panel on "Crypto Wars 2015" with Matt Blaze and a couple of government employees.

EDITED TO ADD (5/15): News article.

Posted on May 14, 2015 at 1:12 PM • 18 Comments

German Cryptanalysis of the M-209

This 1947 document describes a German machine to cryptanalyze the American M-209 mechanical encryption machine. I can't figure out anything about how it works.

EDITED TO ADD (5/14): German attacks on the M-209.

Posted on May 12, 2015 at 4:13 PM • 8 Comments

Amateurs Produce Amateur Cryptography

Anyone can design a cipher that he himself cannot break. This is why you should uniformly distrust amateur cryptography, and why you should only use published algorithms that have withstood broad cryptanalysis. All cryptographers know this, but non-cryptographers do not. And this is why we repeatedly see bad amateur cryptography in fielded systems.

The latest is the cryptography in the Open Smart Grid Protocol, which is so bad as to be laughable. From the paper:

Dumb Crypto in Smart Grids: Practical Cryptanalysis of the Open Smart Grid Protocol

Philipp Jovanovic and Samuel Neves

Abstract: This paper analyses the cryptography used in the Open Smart Grid Protocol (OSGP). The authenticated encryption (AE) scheme deployed by OSGP is a non-standard composition of RC4 and a home-brewed MAC, the "OMA digest."

We present several practical key-recovery attacks against the OMA digest. The first and basic variant can achieve this with a mere 13 queries to an OMA digest oracle and negligible time complexity. A more sophisticated version breaks the OMA digest with only 4 queries and a time complexity of about 2^25 simple operations. A different approach only requires one arbitrary valid plaintext-tag pair, and recovers the key in an average of 144 message verification queries, or one ciphertext-tag pair and 168 ciphertext verification queries.

Since the encryption key is derived from the key used by the OMA digest, our attacks break both confidentiality and authenticity of OSGP.

My still-relevant 1998 essay: "Memo to the Amateur Cipher Designer." And my 1999 essay on cryptographic snake oil.

ThreatPost article. BoingBoing post.

Note: That first sentence has been called "Schneier's Law," although the sentiment is much older.

Posted on May 12, 2015 at 5:41 AM • 83 Comments

More on the NSA's Capabilities

Ross Anderson summarizes a meeting in Princeton where Edward Snowden was "present."

Third, the leaks give us a clear view of an intelligence analyst's workflow. She will mainly look in Xkeyscore which is the Google of 5eyes comint; it's a federated system hoovering up masses of stuff not just from 5eyes own assets but from other countries where the NSA cooperates or pays for access. Data are "ingested" into a vast rolling buffer; an analyst can run a federated search, using a selector (such as an IP address) or fingerprint (something that can be matched against the traffic). There are other such systems: "Dancing oasis" is the middle eastern version. Some xkeyscore assets are actually compromised third-party systems; there are multiple cases of rooted SMS servers that are queried in place and the results exfiltrated. Others involve vast infrastructure, like Tempora. If data in Xkeyscore are marked as of interest, they're moved to Pinwale to be memorialised for 5+ years. This is one function of the MDRs (massive data repositories, now more tactfully renamed mission data repositories) like Utah. At present storage is behind ingestion. Xkeyscore buffer times just depend on volumes and what storage they managed to install, plus what they manage to filter out.

As for crypto capabilities, a lot of stuff is decrypted automatically on ingest (e.g. using a "stolen cert," presumably a private key obtained through hacking). Else the analyst sends the ciphertext to CES and they either decrypt it or say they can't. There's no evidence of a "wow" cryptanalysis; it was key theft, or an implant, or a predicted RNG or supply-chain interference. Cryptanalysis has been seen of RC4, but not of elliptic curve crypto, and there's no sign of exploits against other commonly used algorithms. Of course, the vendors of some products have been coopted, notably skype. Homegrown crypto is routinely problematic, but properly implemented crypto keeps the agency out; gpg ciphertexts with RSA 1024 were returned as fails.


What else might we learn from the disclosures when designing and implementing crypto? Well, read the disclosures and use your brain. Why did GCHQ bother stealing all the SIM card keys for Iceland from Gemalto, unless they have access to the local GSM radio links? Just look at the roof panels on US or UK embassies, that look like concrete but are actually transparent to RF. So when designing a protocol ask yourself whether a local listener is a serious consideration.


On the policy front, one of the eye-openers was the scale of intelligence sharing -- it's not just 5 eyes, but 15 or 35 or even 65 once you count all the countries sharing stuff with the NSA. So how does governance work? Quite simply, the NSA doesn't care about policy. Their OGC has 100 lawyers whose job is to "enable the mission"; to figure out loopholes or new interpretations of the law that let stuff get done. How do you restrain this? Could you use courts in other countries, that have stronger human-rights law? The precedents are not encouraging. New Zealand's GCSB was sharing intel with Bangladesh agencies while the NZ government was investigating them for human-rights abuses. Ramstein in Germany is involved in all the drone killings, as fibre is needed to keep latency down low enough for remote vehicle pilots. The problem is that the intelligence agencies figure out ways to shield the authorities from culpability, and this should not happen.


The spooks' lawyers play games saying for example that they dumped content, but if you know IP address and file size you often have it; and IP address is a good enough pseudonym for most intel / LE use. They deny that they outsource to do legal arbitrage (e.g. NSA spies on Brits and GCHQ returns the favour by spying on Americans). Are they telling the truth? In theory there will be an MOU between NSA and the partner agency stipulating respect for each others' laws, but there can be caveats, such as a classified version which says "this is not a binding legal document." The sad fact is that law and legislators are losing the capability to hold people in the intelligence world to account, and also losing the appetite for it.

Worth reading in full.

Posted on May 11, 2015 at 6:26 AM • 227 Comments

Stealing a Billion

It helps if you own the banks:

The report said Shor and his associates worked together in 2012 to buy a controlling stake in three Moldovan banks and then gradually increased the banks' liquidity through a series of complex transactions involving loans being passed between the three banks and foreign entities.

The three banks then issued multimillion-dollar loans to companies that Shor either controlled or was connected to, the report said.

In the end, over $767 million disappeared from the banks in just three days through complex transactions.

A large portion of this money was transferred to offshore entities connected to Shor, according to the report. Some of the money was then deposited into Latvian bank accounts under the names of various foreigners.

Moldova's central bank was subsequently forced to bail out the three banks with $870 million in emergency loans, a move designed to keep the economy afloat.

It's an insider attack, where the insider is in charge.

What's interesting to me is not the extent of the fraud, but how electronic banking makes this sort of thing easier. And possibly easier to investigate as well.

Posted on May 8, 2015 at 6:13 AM • 26 Comments

Online Dating Scams

Interesting research:

We identified three types of scams happening on Jiayuan. The first one involves advertising of escort services or illicit goods, and is very similar to traditional spam. The other two are far more interesting and specific to the online dating landscape. One type of scammers are what we call swindlers. For this scheme, the scammer starts a long-distance relationship with an emotionally vulnerable victim, and eventually asks her for money, for example to purchase the flight ticket to visit her. Needless to say, after the money has been transferred the scammer disappears. Another interesting type of scams that we identified are what we call dates for profit. In this scheme, attractive young ladies are hired by the owners of fancy restaurants. The scam then consists in having the ladies contact people on the dating site, taking them on a date at the restaurant, having the victim pay for the meal, and never arranging a second date. This scam is particularly interesting, because there are good chances that the victim will never realize that he's been scammed -- in fact, he probably had a good time.

Posted on May 7, 2015 at 12:30 PM • 41 Comments

Another Example of Cell Phone Metadata Forensic Surveillance

Matthew Cole explains how the Italian police figured out how the CIA kidnapped Abu Omar in Milan. Interesting use of cell phone metadata, showing how valuable it is for intelligence purposes.

See also this example.

Posted on May 6, 2015 at 5:12 PM • 48 Comments

An Example of Cell Phone Metadata Forensic Surveillance

In this long article on the 2005 assassination of Rafik Hariri in Beirut, there's a detailed section on what the investigators were able to learn from the cell phone metadata:

At Eid's request, a judge ordered Lebanon's two cellphone companies, Alfa and MTC Touch, to produce records of calls and text messages in Lebanon in the four months before the bombing. Eid then studied the records in secret for months. He focused on the phone records of Hariri and his entourage, looking at whom they called, where they went, whom they met and when. He also followed where Adass, the supposed suicide bomber, spent time before he disappeared. He looked at all the calls that took place along the route taken by Hariri's entourage on the day of the assassination. Always he looked for cause and effect. How did one call lead to the next? "He was brilliant, just brilliant," the senior U.N. investigator told me. "He himself, on his own, developed a simple but amazingly efficient program to set about mining this massive bank of data."

The simple algorithm quickly revealed a peculiar pattern. In October 2004, just after Hariri resigned, a certain cluster of cellphones began following him and his now-reduced motorcade wherever they went. These phones stayed close day and night, until the day of the bombing -- when nearly all 63 phones in the group immediately went dark and never worked again.


The investigators now turned their full attention to the cellphone records. Building on Eid's work, they determined that the assassins worked in groups, each with a leader and each adhering to specific procedures. Everyone in the group called the leader, and he called everyone in the group, but the lower-level operatives never called one another.

The investigators gave each group a color. The green group consisted of 18 Alfa phones, purchased with fake identification from two shops in South Beirut in July and August 2004. The purpose of the fake IDs was not to defraud Alfa out of payment; every month from September 2004 to May 2005, someone went to an Alfa office and paid all 18 bills in cash, without leaving any clue to his identity. The total phone bill for the green network, including activation fees, was $7,375 -- a prodigious amount, considering that 15 of the green group's 18 phones went almost entirely unused.

The first spike in call activity occurred in September 2004, immediately after Hariri announced his resignation. The investigators contend that the green group was at the center of the conspiracy. The phone number 3140023 belonged to the top leader, and the numbers 3159300 and 3150071 belonged to his two deputies. (He called them and they called him, but with those phones, they never called each other.) The two deputies carried phones belonging to other groups, through which they passed on instructions to the other participants in the operation. When a member of one group would call a group leader, the group leader would often follow up by switching to a green phone and calling the supreme leader, who was nearly always in South Beirut, where Hezbollah keeps its headquarters.

On Oct. 20, 2004, the day Hariri left office and his security detail was significantly reduced, the blue group went into operation. It originally worked according to the same rules as the green group, but its active membership increased from three phones to 15, with seven connected to Alfa and eight to MTC Touch. All of the blue phones were prepaid. Some were acquired as early as 2003 and had seen little or no use. The people who bought them also gave false identification, and again money seemed to be in plentiful supply. The minutes that expired each month went largely unused, but the phones were loaded again and again. When the blue group went dark, the phones still had unused minutes worth $4,287.

The prosecutors say the blue group followed Hariri's movements. On the morning of Oct. 20, its members were already deployed around Quraitem Palace. At 10:30 a.m., Hariri set out toward Parliament and then to the presidential palace, where Lahoud was waiting to receive his resignation. The cell towers picked up the blue group's members moving with him and calling their chief. From then on, the blue phones trailed Hariri nearly everywhere -- to Parliament, to meetings with political leaders, to long lunches at the Saint-Georges Yacht Club & Marina. When Hariri was at his home, so were they. When he flew abroad, they moved with him to the airport and then stopped operating until he returned, when they would pick up the trail again.

Eventually, the yellow group was added....

There's a lot more. It's section 6 of the article.
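The group discipline the investigators exploited -- every operative calls the group leader, but operatives never call one another -- is a star topology, and it can be surfaced from a call log with a simple check. Here is a minimal sketch, not the investigators' actual program: the data format (a list of caller/callee pairs) is my assumption, and the toy log reuses the leader number quoted above with otherwise invented entries.

```python
from collections import defaultdict

def find_star_groups(calls, min_members=3):
    """Find 'leader' numbers whose contacts form a star: members talk
    to the leader but never to one another. `calls` is a hypothetical
    list of (caller, callee) pairs; real carrier records would need
    parsing into this shape first."""
    contacts = defaultdict(set)
    for a, b in calls:
        contacts[a].add(b)
        contacts[b].add(a)

    groups = {}
    for leader, members in contacts.items():
        if len(members) < min_members:
            continue
        # Star check: no member has direct contact with another member.
        if all(not (contacts[m] & members) for m in members):
            groups[leader] = sorted(members)
    return groups

# Toy log: 3140023 (the leader number from the quoted article) is
# called by three operatives who never call each other (invented data).
log = [("3159300", "3140023"), ("3150071", "3140023"),
       ("3151111", "3140023")]
groups = find_star_groups(log)
```

A real analysis would also have to window the data in time and tolerate noise (wrong numbers, shared towers), but the core pattern match is this simple.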

See also this example.

Posted on May 6, 2015 at 7:09 AM • 29 Comments

Easily Cracking a Master Combination Lock


Kamkar told Ars his Master Lock exploit started with a well-known vulnerability that allows Master Lock combinations to be cracked in 100 or fewer tries. He then physically broke open a combination lock and noticed the resistance he observed was caused by two lock parts that touched in a way that revealed important clues about the combination. (He likened the Master Lock design to a side channel in cryptographic devices that can be exploited to obtain the secret key.) Kamkar then made a third observation that was instrumental to his Master Lock exploit: the first and third digit of the combination, when divided by four, always return the same remainder. By combining the insights from all three weaknesses he devised the attack laid out in the video.
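The mod-4 relationship Kamkar observed is also what powers the older 100-try attack mentioned in the quote: once the third digit is recovered mechanically, the first digit must share its remainder mod 4, and, in the classic formulation of that attack, the second digit's remainder is offset by 2. A quick sketch of the pruning; the second-digit rule is taken from the well-known published attack, not from the quote above:

```python
def candidate_combinations(third):
    """Enumerate Master Lock candidates given a recovered third digit
    (40-position dial). Constraints from the classic attack: the first
    digit shares the third digit's remainder mod 4, and the second
    digit's remainder is offset by 2. This prunes the naive space of
    40^3 = 64,000 combinations down to 10 x 10 = 100."""
    r = third % 4
    firsts = [d for d in range(40) if d % 4 == r]             # 10 options
    seconds = [d for d in range(40) if d % 4 == (r + 2) % 4]  # 10 options
    return [(f, s, third) for f in firsts for s in seconds]

combos = candidate_combinations(30)  # 30 is a hypothetical recovered digit
```

Kamkar's side-channel observations narrow this much further, which is what makes his attack practical in a handful of tries rather than a hundred.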

Posted on May 5, 2015 at 6:59 AM • 20 Comments


Detecting QUANTUMINSERT

Fox-IT has a blog post (and has published Snort rules) on how to detect man-on-the-side Internet attacks like the NSA's QUANTUMINSERT.

From a Wired article:

But hidden within another document leaked by Snowden was a slide that provided a few hints about detecting Quantum Insert attacks, which prompted the Fox-IT researchers to test a method that ultimately proved to be successful. They set up a controlled environment and launched a number of Quantum Insert attacks against their own machines to analyze the packets and devise a detection method.

According to the Snowden document, the secret lies in analyzing the first content-carrying packets that come back to a browser in response to its GET request. One of the packets will contain content for the rogue page; the other will be content for the legitimate site sent from a legitimate server. Both packets, however, will have the same sequence number. That, it turns out, is a dead giveaway.

Here's why: When your browser sends a GET request to pull up a web page, it sends out a packet containing a variety of information, including the source and destination IP address of the browser as well as so-called sequence and acknowledge numbers, or ACK numbers. The responding server sends back a response in the form of a series of packets, each with the same ACK number as well as a sequential number so that the series of packets can be reconstructed by the browser as each packet arrives to render the web page.

But when the NSA or another attacker launches a Quantum Insert attack, the victim's machine receives duplicate TCP packets with the same sequence number but with a different payload. "The first TCP packet will be the 'inserted' one while the other is from the real server, but will be ignored by the [browser]," the researchers note in their blog post. "Of course it could also be the other way around; if the QI failed because it lost the race with the real server response."

Although it's possible that in some cases a browser will receive two packets with the same sequence number from a legitimate server, they will still contain the same general content; a Quantum Insert packet, however, will have content with significant differences.
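The giveaway described above, two content-carrying packets occupying the same TCP sequence slot with different payloads, is straightforward to check for once packets are parsed. A minimal sketch over hypothetical pre-parsed packet dicts (not a real pcap API, and not Fox-IT's implementation; a production detector also has to handle retransmissions, overlapping segments, and reordering):

```python
from collections import defaultdict

def find_quantum_inserts(packets):
    """Flag flows where two TCP packets share the same sequence number
    but carry different payloads. `packets` is assumed to be an
    iterable of dicts with 'src', 'sport', 'dst', 'dport', 'seq', and
    'payload' keys (hypothetical pre-parsed capture data)."""
    seen = defaultdict(list)  # (flow, seq) -> payloads observed
    suspects = []
    for pkt in packets:
        key = (pkt["src"], pkt["sport"], pkt["dst"], pkt["dport"], pkt["seq"])
        for earlier in seen[key]:
            if earlier != pkt["payload"]:
                # Same sequence slot, different content: QI candidate.
                suspects.append(key)
        seen[key].append(pkt["payload"])
    return suspects
```

Note that ordinary retransmissions (same sequence number, same payload) are deliberately ignored; only content mismatches, the "dead giveaway" from the article, are flagged.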

It's important we develop defenses against these attacks, because everyone is using them.

EDITED TO ADD (5/14): Detection for QI was recently released for Bro, Snort and Suricata.

Posted on May 4, 2015 at 6:17 AM • 55 Comments

Ears as a Biometric

It's an obvious biometric for cell phones:

Bodyprint recognizes users by their ears with 99.8% precision with a false rejection rate of only 1 out of 13.

Grip, too.

News story.

EDITED TO ADD: I blogged this in 2011.

Posted on May 1, 2015 at 12:46 PM • 31 Comments

Photo of Bruce Schneier by Per Ervland.

Schneier on Security is a personal website. Opinions expressed are not necessarily those of IBM Resilient.