
A Fraying of the Public/Private Surveillance Partnership

The public/private surveillance partnership between the NSA and corporate data collectors is starting to fray. The reason is sunlight. The publicity resulting from the Snowden documents has made companies think twice before allowing the NSA access to their users’ and customers’ data.

Pre-Snowden, there was no downside to cooperating with the NSA. If the NSA asked you for copies of all your Internet traffic, or to put backdoors into your security software, you could assume that your cooperation would forever remain secret. To be fair, not every corporation cooperated willingly. Some fought in court. But it seems that a lot of them, telcos and backbone providers especially, were happy to give the NSA unfettered access to everything. Post-Snowden, this is changing. Now that many companies’ cooperation has become public, they’re facing a PR backlash from customers and users who are upset that their data is flowing to the NSA. And this is costing those companies business.

How much is unclear. In July, right after the PRISM revelations, the Cloud Security Alliance reported that US cloud companies could lose $35 billion over the next three years, mostly due to losses of foreign sales. Surely that number has increased as outrage over NSA spying continues to build in Europe and elsewhere. There is no similar report for software sales, although I have attended private meetings where several large US software companies complained about the loss of foreign sales. On the hardware side, IBM is losing business in China. The US telecom companies are also suffering: AT&T is losing business worldwide.

This is the new reality. The rules of secrecy are different, and companies have to assume that their responses to NSA data demands will become public. This means there is now a significant cost to cooperating, and a corresponding benefit to fighting.

Over the past few months, more companies have woken up to the fact that the NSA is basically treating them as adversaries, and are responding as such. In mid-October, it became public that the NSA was collecting e-mail address books and buddy lists from Internet users logging into different service providers. Yahoo, which didn’t encrypt those user connections by default, allowed the NSA to collect much more of its data than Google, which did. That same day, Yahoo announced that it would implement SSL encryption by default for all of its users. Two weeks later, when it became public that the NSA was collecting data on Google users by eavesdropping on the company’s trunk connections between its data centers, Google announced that it would encrypt those connections.

We recently learned that Yahoo fought a government order to turn over data. Lavabit fought its order as well. Apple is now tweaking the government. And we think better of those companies because of it.

Now Lavabit, which closed down its e-mail service rather than comply with the NSA’s request for the master keys that would compromise all of its customers, has teamed with Silent Circle to develop a secure e-mail standard that is resistant to these kinds of tactics.

The Snowden documents made it clear how much the NSA relies on corporations to eavesdrop on the Internet. The NSA didn’t build a massive Internet eavesdropping system from scratch. It noticed that the corporate world was already eavesdropping on every Internet user—surveillance is the business model of the Internet, after all—and simply got copies for itself.

Now, that secret ecosystem is breaking down. Supreme Court Justice Louis Brandeis wrote about transparency, saying “Sunlight is said to be the best of disinfectants.” In this case, it seems to be working.

These developments will only help security. Remember that while Edward Snowden has given us a window into the NSA’s activities, these sorts of tactics are probably also used by other intelligence services around the world. And today’s secret NSA programs become tomorrow’s PhD theses, and the next day’s criminal hacker tools. It’s impossible to build an Internet where the good guys can eavesdrop, and the bad guys cannot. We have a choice between an Internet that is vulnerable to all attackers, or an Internet that is safe from all attackers. And a safe and secure Internet is in everyone’s best interests, including the US’s.

This essay previously appeared on TheAtlantic.com.

Posted on November 14, 2013 at 6:21 AM

Microsoft Retiring SHA-1 in 2016

I think this is a good move on Microsoft’s part:

Microsoft is recommending that customers and CA’s stop using SHA-1 for cryptographic applications, including use in SSL/TLS and code signing. Microsoft Security Advisory 2880823 has been released along with the policy announcement that Microsoft will stop recognizing the validity of SHA-1 based certificates after 2016.

More news.

SHA-1 isn’t broken yet in a practical sense, but the algorithm is barely hanging on and attacks will only get worse. Migrating away from SHA-1 is the smart thing to do.
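For application code, the migration itself is often trivial. A minimal sketch using Python's standard hashlib (the specific data and variable names here are illustrative, not from the advisory):

```python
import hashlib

data = b"message to fingerprint"

# Legacy digest: SHA-1 produces a 160-bit hash (40 hex characters).
# Collision attacks against it are steadily getting cheaper.
legacy = hashlib.sha1(data).hexdigest()

# Recommended replacement: SHA-256 produces a 256-bit hash
# (64 hex characters) with no known practical attacks.
current = hashlib.sha256(data).hexdigest()

print(len(legacy), len(current))  # 40 64
```

The hard part is not the code change but the ecosystem: certificates, signing infrastructure, and old clients all have to move together, which is why Microsoft is announcing a deadline years in advance.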

Posted on November 13, 2013 at 2:17 PM

Another QUANTUMINSERT Attack Example

Der Spiegel is reporting that the GCHQ used QUANTUMINSERT to direct users to fake LinkedIn and Slashdot pages run by—this code name is not in the article—FOXACID servers. There’s not a lot technically new in the article, but we do get some information about popularity and jargon.

According to other secret documents, Quantum is an extremely sophisticated exploitation tool developed by the NSA and comes in various versions. The Quantum Insert method used with Belgacom is especially popular among British and US spies. It was also used by GCHQ to infiltrate the computer network of OPEC’s Vienna headquarters.

The injection attempts are known internally as “shots,” and they have apparently been relatively successful, especially the LinkedIn version. “For LinkedIn the success rate per shot is looking to be greater than 50 percent,” states a 2012 document.

Slashdot has reacted to the story.

I wrote about QUANTUMINSERT, and the whole infection process, here. We have a list of “implants” that the NSA uses to “exfiltrate” information here.

Posted on November 13, 2013 at 6:46 AM

Bizarre Online Gambling Movie-Plot Threat

This article argues that online gambling is a strategic national threat because terrorists could use it to launder money.

The Harper demonstration showed the technology and techniques that terror and crime organizations could use to operate untraceable money laundering built on a highly liquid legalized online poker industry—just the environment that will result from the spread of poker online.

[…]

A single poker game takes just a few hours to transfer $5 million as was recently demonstrated—legally—by American player Brian Hastings with his Swedish competitor half a world away. An established al-Qaida poker network could extract from the United States enough untraceable money in six days to fund an operation like the 9/11 attack on the World Trade Center.

I’m impressed with the massive fear resonating in this essay.

Posted on November 12, 2013 at 6:35 AM

Dan Geer Explains the Government Surveillance Mentality

This talk by Dan Geer explains the NSA mindset of “collect everything”:

I previously worked for a data protection company. Our product was, and I believe still is, the most thorough on the market. By “thorough” I mean the dictionary definition, “careful about doing something in an accurate and exact way.” To this end, installing our product instrumented every system call on the target machine. Data did not and could not move in any sense of the word “move” without detection. Every data operation was caught and monitored. It was total surveillance data protection. Its customers were companies that don’t accept half-measures. What made this product stick out was that very thoroughness, but here is the point: Unless you fully instrument your data handling, it is not possible for you to say what did not happen. With total surveillance, and total surveillance alone, it is possible to treat the absence of evidence as the evidence of absence. Only when you know everything that *did* happen with your data can you say what did *not* happen with your data.

The alternative to total surveillance of data handling is to answer more narrow questions, questions like “Can the user steal data with a USB stick?” or “Does this outbound e-mail have a Social Security Number in it?” Answering direct questions is exactly what a defensive mindset says you must do, and that is “never make the same mistake twice.” In other words, if someone has lost data because of misuse of some facility on the computer, then you either disable that facility or you wrap it in some kind of perimeter. Lather, rinse, and repeat. This extends all the way to such trivial matters as timer-based screen locking.

The difficulty with the defensive mindset is that it leaves in place the fundamental strategic asymmetry of cybersecurity, namely that while the workfactor for the offender is the price of finding a new method of attack, the workfactor for the defender is the cumulative cost of forever defending against all attack methods yet discovered. Over time, the curve for the cost of finding a new attack and the curve for the cost of defending against all attacks to date cross. Once those curves cross, the offender never has to worry about being out of the money. I believe that that crossing occurred some time ago.

The total surveillance strategy is, to my mind, an offensive strategy used for defensive purposes. It says “I don’t know what the opposition is going to try, so everything is forbidden unless we know it is good.” In that sense, it is like whitelisting applications. Taking either the application whitelisting or the total data surveillance approach is saying “That which is not permitted is forbidden.”

[…]

We all know the truism, that knowledge is power. We all know that there is a subtle yet important distinction between information and knowledge. We all know that a negative declaration like “X did not happen” can only be proven true if you have the enumeration of *everything* that did happen and can show that X is not in it. We all know that when a President says “Never again” he is asking for the kind of outcome for which proving a negative, lots of negatives, is categorically essential. Proving a negative requires omniscience. Omniscience requires god-like powers.
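Geer's contrast between the two mindsets can be reduced to a toy sketch (the application names and policies below are invented for illustration):

```python
# Default-deny, like application whitelisting or total data surveillance:
# "that which is not permitted is forbidden."
ALLOWED_APPS = {"winword.exe", "excel.exe", "outlook.exe"}

def may_run_whitelist(app: str) -> bool:
    return app in ALLOWED_APPS

# Default-allow, the defensive mindset: block each known-bad item
# as it is discovered ("never make the same mistake twice").
KNOWN_BAD = {"dropper.exe", "keylogger.exe"}

def may_run_blocklist(app: str) -> bool:
    return app not in KNOWN_BAD

# A never-before-seen binary is blocked under the first policy but
# allowed under the second -- the strategic asymmetry Geer describes:
# the defender must enumerate all attacks; the whitelister need not.
print(may_run_whitelist("unknown.exe"), may_run_blocklist("unknown.exe"))
```

The blocklist grows forever as new attacks appear; the whitelist only changes when the defender's own legitimate needs change.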

The whole essay is well worth reading.

Posted on November 11, 2013 at 6:21 AM

Why the Government Should Help Leakers

In the Information Age, it’s easier than ever to steal and publish data. Corporations and governments have to adjust to their secrets being exposed, regularly.

When massive amounts of government documents are leaked, journalists sift through them to determine which pieces of information are newsworthy, and confer with government agencies over what needs to be redacted.

Managing this reality is going to require that governments actively engage with members of the press who receive leaked secrets, helping them secure those secrets—even while being unable to prevent them from publishing. It might seem abhorrent to help those who are seeking to bring your secrets to light, but it’s the best way to ensure that the things that truly need to be secret remain secret, even as everything else becomes public.

The WikiLeaks cables serve as an excellent example of how a government should not deal with massive leaks of classified information.

WikiLeaks has said it asked US authorities for help in determining what should be redacted before publication of documents, although some government officials have challenged that statement. WikiLeaks’ media partners did redact many documents, but eventually all 250,000 unredacted cables were released to the world as a result of a mistake.

The damage was nowhere near as serious as government officials initially claimed, but it had been avoidable.

Fast-forward to today, and we have an even bigger trove of classified documents. What Edward Snowden took—”exfiltrated” is the National Security Agency term—dwarfs the State Department cables, and contains considerably more important secrets. But again, the US government is doing nothing to prevent a massive data dump.

The government engages with the press on individual stories. The Guardian, the Washington Post, and the New York Times are all redacting the original Snowden documents based on discussions with the government. This isn’t new. The US press regularly consults with the government before publishing something that might be damaging. In 2006, the New York Times consulted with both the NSA and the Bush administration before publishing Mark Klein’s whistle-blowing about the NSA’s eavesdropping on AT&T trunk circuits. In all these cases, the goal is to minimize actual harm to US security while ensuring the press can still report stories in the public interest, even if the government doesn’t want it to.

In today’s world of reduced secrecy, whistleblowing as civil disobedience, and massive document exfiltrations, negotiations over individual stories aren’t enough. The government needs to develop a protocol to actively help news organizations expose their secrets safely and responsibly.

Here’s what should have happened as soon as Snowden’s whistle-blowing became public. The government should have told the reporters and publications with the classified documents something like this: “OK, you have them. We know that we can’t undo the leak. But please let us help. Let us help you secure the documents as you write your stories, and securely dispose of the documents when you’re done.”

The people who have access to the Snowden documents say they don’t want them to be made public in their raw form or to get in the hands of rival governments. But accidents happen, and reporters are not trained in military secrecy practices.

Copies of some of the Snowden documents are being circulated to journalists and others. With each copy, each person, each day, there’s a greater chance that, once again, someone will make a mistake and some—or all—of the raw documents will appear on the Internet. A formal system of working with whistle-blowers could prevent that.

I’m sure the suggestion sounds odious to a government that is actively engaging in a war on whistle-blowers, and that views Snowden as a criminal and the reporters writing these stories as “helping the terrorists.” But it makes sense. Harvard law professor Jonathan Zittrain compares this to plea bargaining.

The police regularly negotiate lenient sentences or probation for confessed criminals in order to convict more important criminals. They make deals with all sorts of unsavory people, giving them benefits they don’t deserve, because the result is a greater good.

In the Snowden case, an agreement would safeguard the most important of NSA’s secrets from other nations’ intelligence agencies. It would help ensure that the truly secret information not be exposed. It would protect US interests.

Why would reporters agree to this? Two reasons. One, they actually do want these documents secured while they look for stories to publish. And two, it would be a public demonstration of that desire.

Why wouldn’t the government just collect all the documents under the pretense of securing them and then delete them? For the same reason they don’t renege on plea bargains: No one would trust them next time. And, of course, because smart reporters will probably keep encrypted backups under their own control.

We’re nowhere near the point where this system could be put into practice, but it’s worth thinking about how it could work. The government would need to establish a semi-independent group, called, say, a Leak Management unit, which could act as an intermediary. Since it would be isolated from the agencies that were the source of the leak, its officials would be less vested and—this is important—less angry over the leak. Over time, it would build a reputation, develop protocols that reporters could rely on. Leaks will be more common in the future, but they’ll still be rare. Expecting each agency to develop expertise in this process is unrealistic.

If there were sufficient trust between the press and the government, this could work. And everyone would benefit.

This essay previously appeared on CNN.com.

Posted on November 8, 2013 at 6:58 AM

Risk-Based Authentication

I like this idea of giving each individual login attempt a risk score, based on the characteristics of the attempt:

The risk score estimates the risk associated with a log-in attempt based on a user’s typical log-in and usage profile, taking into account their device and geographic location, the system they’re trying to access, the time of day they typically log in, their device’s IP address, and even their typing speed. An employee logging into a CRM system using the same laptop, at roughly the same time of day, from the same location and IP address will have a low risk score. By contrast, an attempt to access a finance system from a tablet at night in Bali could potentially yield an elevated risk score.

Risk thresholds for individual systems are established based on the sensitivity of the information they store and the impact if the system were breached. Systems housing confidential financial data, for example, will have a low risk threshold.

If the risk score for a user’s access attempt exceeds the system’s risk threshold, authentication controls are automatically elevated, and the user may be required to provide a higher level of authentication, such as a PIN or token. If the risk score is too high, it may be rejected outright.
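The mechanics described above are simple to sketch. A minimal illustration in Python, with point weights and thresholds invented purely for the example:

```python
from dataclasses import dataclass

@dataclass
class LoginAttempt:
    known_device: bool      # same laptop as usual?
    usual_location: bool    # usual geographic location?
    usual_ip: bool          # usual IP address?
    usual_hours: bool       # roughly the usual time of day?

def risk_score(a: LoginAttempt) -> int:
    """Add penalty points for each attribute that deviates from the
    user's typical profile; a higher score means a riskier attempt."""
    score = 0
    if not a.known_device:
        score += 30
    if not a.usual_location:
        score += 30
    if not a.usual_ip:
        score += 20
    if not a.usual_hours:
        score += 10
    return score

def decide(score: int, threshold: int, reject_above: int = 80) -> str:
    """Compare the score against the target system's risk threshold."""
    if score > reject_above:
        return "reject"
    if score > threshold:
        return "step-up"   # demand a PIN, token, or other second factor
    return "allow"

# Routine login: same laptop, same office, daytime -> score 0.
routine = risk_score(LoginAttempt(True, True, True, True))
# Anomalous login: unknown tablet, abroad, new IP, at night -> score 90.
odd = risk_score(LoginAttempt(False, False, False, False))
print(decide(routine, threshold=20), decide(odd, threshold=20))
```

A sensitive system (say, finance) would simply be configured with a lower `threshold`, so that even mild anomalies trigger step-up authentication. Real products weight many more signals, typing speed among them, and learn the weights from usage data rather than hard-coding them.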

Posted on November 7, 2013 at 7:06 AM
