Blog: August 2013 Archives
I don't like stories about the personalities in the Snowden affair, because it detracts from the NSA and the policy issues. But I'm a sucker for operational security, and just have to post this detail from their first meeting in Hong Kong:
Snowden had instructed them that once they were in Hong Kong, they were to go at an appointed time to the Kowloon district and stand outside a restaurant that was in a mall connected to the Mira Hotel. There, they were to wait until they saw a man carrying a Rubik's Cube, then ask him when the restaurant would open. The man would answer their question, but then warn that the food was bad.
Actually, the whole article is interesting. The author is writing a book about surveillance and privacy, one of probably a half dozen about the Snowden affair that will come out this year.
EDITED TO ADD (8/31): While we're on the topic, here's some really stupid opsec on the part of Greenwald and Poitras:
- Statement from senior Cabinet Office civil servant to #miranda case says material was 58,000 ‘highly classified UK intelligence documents’
- Police who seized documents from #miranda found among them a piece of paper with the decryption password, the statement says
- This password allowed them to decrypt one file on his seized hard drive, adds Oliver Robbins, Cabinet Office security adviser #miranda
You can't do this kind of stuff when you're playing with the big boys.
Lavabit is -- well, was -- an e-mail service that offered more privacy than the typical large-Internet-corporation services that most of us use. It was a small company, owned and operated by Ladar Levison, and it was popular among the tech-savvy. NSA whistleblower Edward Snowden was among its half-million users.
Last month, Levison reportedly received an order -- probably a National Security Letter -- to allow the NSA to eavesdrop on everyone's e-mail accounts on Lavabit. Rather than "become complicit in crimes against the American people," he turned the service off. Note that we don't know for sure that he received an NSL -- that's the order authorized by the Patriot Act that doesn't require a judge's signature and prohibits the recipient from talking about it -- or what it covered, but Levison has said that he had complied with requests for individual e-mail access in the past, and that this was very different.
So far, we just have an extreme moral act in the face of government pressure. It's what happened next that is the most chilling. The government threatened him with arrest, arguing that shutting down this e-mail service was a violation of the order.
There it is. If you run a business, and the FBI or NSA want to turn it into a mass surveillance tool, they believe they can do so, solely on their own initiative. They can force you to modify your system. They can do it all in secret and then force your business to keep that secret. Once they do that, you no longer control that part of your business. You can't shut it down. You can't terminate part of your service. In a very real sense, it is not your business anymore. It is an arm of the vast U.S. surveillance apparatus, and if your interest conflicts with theirs then they win. Your business has been commandeered.
For most Internet companies, this isn't a problem. They are already engaging in massive surveillance of their customers and users -- collecting and using this data is the primary business model of the Internet -- so it's easy to comply with government demands and give the NSA complete access to everything. This is what we learned from Edward Snowden. Through programs like PRISM, BLARNEY and OAKSTAR, the NSA obtained bulk access to services like Gmail and Facebook, and to Internet backbone connections throughout the US and the rest of the world. But if it were a problem for those companies, presumably the government would not allow them to shut down.
To be fair, we don't know if the government can actually convict someone of closing a business. It might just be part of their coercion tactics. Intimidation and retaliation are part of how the NSA does business.
Former Qwest CEO Joseph Nacchio has a story of what happens to a large company that refuses to cooperate. In February 2001 -- before the 9/11 terrorist attacks -- the NSA approached the four major US telecoms and asked for their cooperation in a secret data collection program, the one we now know to be the bulk metadata collection program exposed by Edward Snowden. Qwest was the only telecom to refuse, leaving the NSA with a hole in its spying efforts. The NSA retaliated by canceling a series of big government contracts with Qwest. The company has since been purchased by CenturyLink, which we presume is more cooperative with NSA demands.
That was before the Patriot Act and National Security Letters. Now, presumably, Nacchio would just comply. Protection rackets are easier when you have the law backing you up.
As the Snowden whistleblowing documents continue to be made public, we're getting further glimpses into the surveillance state that has been secretly growing around us. The collusion of corporate and government surveillance interests is a big part of this, but so is the government's resorting to intimidation. Every Lavabit-like service that shuts down -- and there have been several -- gives us consumers less choice, and pushes us into the large services that cooperate with the NSA. It's past time we demanded that Congress repeal National Security Letters, give us privacy rights in this new information age, and force meaningful oversight on this rogue agency.
This essay previously appeared in USA Today.
EDITED TO ADD: This essay has been translated into Danish.
Assume it's really true that the NSA has no idea what documents Snowden took, and that they wouldn't even know he'd taken anything if he hadn't gone public. The fact that abuses of their systems by NSA officers were largely discovered through self-reporting substantiates that belief.
Given that, why should anyone believe that Snowden is the first person to walk out the NSA's door with multiple gigabytes of classified documents? He might be the first to release documents to the public, but it's a reasonable assumption that previous leakers were working for Russia, China, or some other country.
New paper on the FTC and its actions to protect privacy:
Abstract: One of the great ironies about information privacy law is that the primary regulation of privacy in the United States has barely been studied in a scholarly way. Since the late 1990s, the Federal Trade Commission (FTC) has been enforcing companies' privacy policies through its authority to police unfair and deceptive trade practices. Despite more than fifteen years of FTC enforcement, there is no meaningful body of judicial decisions to show for it. The cases have nearly all resulted in settlement agreements. Nevertheless, companies look to these agreements to guide their privacy practices. Thus, in practice, FTC privacy jurisprudence has become the broadest and most influential regulating force on information privacy in the United States -- more so than nearly any privacy statute and any common law tort.
In this article, we contend that the FTC's privacy jurisprudence is the functional equivalent to a body of common law, and we examine it as such. We explore how and why the FTC, and not contract law, came to dominate the enforcement of privacy policies. A common view of the FTC's privacy jurisprudence is that it is thin, merely focusing on enforcing privacy promises. In contrast, a deeper look at the principles that emerge from FTC privacy "common law" demonstrates that the FTC's privacy jurisprudence is quite thick. The FTC has codified certain norms and best practices and has developed some baseline privacy protections. Standards have become so specific they resemble rules. We contend that the foundations exist to develop this "common law" into a robust privacy regulatory regime, one that focuses on consumer expectations of privacy, that extends far beyond privacy policies, and that involves a full suite of substantive rules that exist independently from a company's privacy representations.
Abstract: The greatest danger to free speech on the Internet today is filtering of traffic using protocol fingerprinting. Protocols such as SSL, Tor, BitTorrent, and VPNs are being summarily blocked, regardless of their legal and ethical uses. Fortunately, it is possible to bypass this filtering by reencoding traffic into a form which cannot be correctly fingerprinted by the filtering hardware. I will be presenting a tool called Dust which provides an engine for reencoding traffic into a variety of forms. By developing a good model of how filtering hardware differentiates traffic into different protocols, a profile can be created which allows Dust to reencode arbitrary traffic to bypass the filters.
Dust is different from other approaches because it is not simply another obfuscated protocol. It is an engine which can encode traffic according to the given specifications. As the filters change their algorithms for protocol detection, rather than developing a new protocol, Dust can just be reconfigured to use different parameters. In fact, Dust can be automatically reconfigured using examples of what traffic is blocked and what traffic gets through. Using machine learning, a new profile is created which will reencode traffic so that it resembles that which gets through and not that which is blocked. Dust has been created with the goal of defeating real filtering hardware currently deployed for the purpose of censoring free speech on the Internet. In this talk I will discuss how the real filtering hardware works and how to effectively defeat it.
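The core idea in the abstract -- fingerprint traffic statistically, then reshape blocked traffic to look like permitted traffic -- can be illustrated with a toy sketch. This is my own construction for illustration, not Dust's actual code: it classifies traffic by byte-value histograms with a nearest-centroid rule, and "reencodes" by XORing with a keystream, which flattens the byte distribution toward uniform.

```python
# Toy sketch (not Dust itself): fingerprint traffic by byte-value
# histograms, then reencode it so the histogram no longer matches.

def byte_histogram(data: bytes) -> list:
    """Normalized frequency of each byte value 0..255."""
    counts = [0] * 256
    for b in data:
        counts[b] += 1
    total = max(len(data), 1)
    return [c / total for c in counts]

def distance(h1, h2):
    """Squared Euclidean distance between two histograms."""
    return sum((a - b) ** 2 for a, b in zip(h1, h2))

def classify(data: bytes, profiles: dict) -> str:
    """Nearest-centroid: return the name of the closest protocol profile."""
    hist = byte_histogram(data)
    return min(profiles, key=lambda name: distance(hist, profiles[name]))

def reencode(data: bytes, key: bytes) -> bytes:
    """XOR with a repeating keystream; applying it twice restores the data."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))
```

A real filter models protocols with far richer features (packet sizes, timing, handshake bytes), but the reconfiguration loop the abstract describes is the same shape: learn profiles from examples of blocked and permitted traffic, then transform outgoing traffic until the classifier puts it in a permitted class.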
There's an article from Wednesday's Wall Street Journal that gives more details about the NSA's data collection efforts.
The system has the capacity to reach roughly 75% of all U.S. Internet traffic in the hunt for foreign intelligence, including a wide array of communications by foreigners and Americans. In some cases, it retains the written content of emails sent between citizens within the U.S. and also filters domestic phone calls made with Internet technology, these people say.
The programs, code-named Blarney, Fairview, Oakstar, Lithium and Stormbrew, among others, filter and gather information at major telecommunications companies. Blarney, for instance, was established with AT&T Inc....
This filtering takes place at more than a dozen locations at major Internet junctions in the U.S., officials say. Previously, any NSA filtering of this kind was largely believed to be happening near points where undersea or other foreign cables enter the country.
The systems operate like this: The NSA asks telecom companies to send it various streams of Internet traffic it believes most likely to contain foreign intelligence. This is the first cut of the data. These requests don't ask for all Internet traffic. Rather, they focus on certain areas of interest, according to a person familiar with the legal process. "It's still a large amount of data, but not everything in the world," this person says.
The second cut is done by NSA. It briefly copies the traffic and decides which communications to keep based on what it calls "strong selectors"—say, an email address, or a large block of computer addresses that correspond to an organization it is interested in. In making these decisions, the NSA can look at content of communications as well as information about who is sending the data. One U.S. official says the agency doesn't itself "access" all the traffic within the surveillance system. The agency defines access as "things we actually touch," this person says, pointing out that the telecom companies do the first stage of filtering.
The surveillance system is built on relationships with telecommunications carriers that together cover about 75% of U.S. Internet communications. They must hand over what the NSA asks for under orders from the secret Foreign Intelligence Surveillance Court. The firms search Internet traffic based on the NSA's criteria, current and former officials say.
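The "strong selector" second cut the Journal describes -- keep a communication if it matches a watched e-mail address or a block of computer addresses -- is conceptually simple. Here's a minimal sketch of that matching logic; the selector values and message format are hypothetical, invented for illustration:

```python
# Illustrative sketch of selector-based filtering (my construction,
# based on the WSJ's description, not any actual system).
import ipaddress

WATCHED_EMAILS = {"target@example.org"}                      # hypothetical
WATCHED_NETS = [ipaddress.ip_network("198.51.100.0/24")]     # hypothetical

def matches_selector(sender_email: str, src_ip: str) -> bool:
    """True if the message matches a watched address or IP block."""
    if sender_email in WATCHED_EMAILS:
        return True
    addr = ipaddress.ip_address(src_ip)
    return any(addr in net for net in WATCHED_NETS)

def second_cut(stream):
    """stream: iterable of (sender_email, src_ip, payload) tuples.
    Briefly copy the traffic, keep only what matches, discard the rest."""
    return [msg for msg in stream if matches_selector(msg[0], msg[1])]
```

The point of the sketch is how coarse a "strong selector" can be: a single /24 network block matches every one of its 256 addresses, which is how filtering on an organization sweeps in everyone who communicates from it.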
The NSA seems to have finally found a PR agency with a TS/SI clearance, since there was a response to this story. They've also had a conference call with the press, and the Director of National Intelligence is on Twitter and Tumblr.
I am completely croggled by the fact that the NSA apparently had absolutely no contingency plans for this sort of thing.
Last Sunday, David Miranda was detained while changing planes at London Heathrow Airport by British authorities for nine hours under a controversial British law -- the maximum time allowable without making an arrest. Much has been made of the fact that he's the partner of Glenn Greenwald, the Guardian reporter whom Edward Snowden trusted with many of his NSA documents and the most prolific reporter of the surveillance abuses disclosed in those documents. There's been less discussion of what I feel was the real reason for Miranda's detention. He was ferrying documents between Greenwald and Laura Poitras, a filmmaker and Greenwald's co-reporter on Snowden and his information. These documents were on several USB memory sticks he had with him. He had already carried documents from Greenwald in Rio de Janeiro to Poitras in Berlin, and was on his way back with different documents when he was detained.
The memory sticks were encrypted, of course, and Miranda did not know the key. This didn't stop the British authorities from repeatedly asking for the key, and from confiscating the memory sticks along with his other electronics.
The incident prompted a major outcry in the UK. The UK's Terrorism Act has always been controversial, and this clear misuse -- it was intended to give authorities the right to detain and question suspected terrorists -- is prompting new calls for its review. Certainly the UK police will be more reluctant to misuse the law again in this manner.
I have to admit this story has me puzzled. Why would the British do something like this? What did they hope to gain, and why did they think it worth the cost? And -- of course -- were the British acting on their own under the Official Secrets Act, or were they acting on behalf of the United States? (My initial assumption was that they were acting on behalf of the US, but after the bizarre story of the British GCHQ demanding the destruction of Guardian computers last month, I'm not sure anymore.)
We do know the British were waiting for Miranda. It's reasonable to assume they knew his itinerary, and had good reason to suspect that he was ferrying documents back and forth between Greenwald and Poitras. These documents could be source documents provided by Snowden, new documents that the two were working on either separately or together, or both. That being said, it's inconceivable that the memory sticks would contain the only copies of these documents. Poitras retained copies of everything she gave Miranda. So the British authorities couldn't possibly destroy the documents; the best they could hope for is that they would be able to read them.
Is it truly possible that the NSA doesn't already know what Snowden has? They claim they don't, but after Snowden's name became public, the NSA would have conducted the mother of all audits. It would try to figure out what computer systems Snowden had access to, and therefore what documents he could have accessed. Hopefully, the audit information would give more detail, such as which documents he downloaded. I have a hard time believing that its internal auditing systems would be so bad that it wouldn't be able to discover this.
So if the NSA knows what Snowden has, or what he could have, then the most it could learn from the USB sticks is what Greenwald and Poitras are currently working on, or thinking about working on. But presumably the things the two of them are working on are the things they're going to publish next. Did the intelligence agencies really do all this simply for a few weeks' heads-up on what was coming? Given how ham-handedly the NSA has handled PR as each document was exposed, it seems implausible that it wanted advance knowledge so it could work on a response. It's been two months since the first Snowden revelation, and it still doesn't have a decent PR story.
Furthermore, the UK authorities must have known that the data would be encrypted. Greenwald might have been a crypto newbie at the start of the Snowden affair, but Poitras is known to be good at security. The two have been communicating securely by e-mail when they do communicate. Maybe the UK authorities thought there was a good chance that one of them would make a security mistake, or that Miranda would be carrying paper documents.
Another possibility is that this was just intimidation. If so, it's misguided. Anyone who regularly reads Greenwald could have told them that he would not have been intimidated -- and, in fact, he expressed the exact opposite sentiment -- and anyone who follows Poitras knows that she is even more strident in her views. Going after the loved ones of state enemies is a typically thuggish tactic, but it's not a very good one in this case. The Snowden documents will get released. There's no way to put this cat back in the bag, not even by killing the principal players.
It could possibly have been intended to intimidate others who are helping Greenwald and Poitras, or the Guardian and its advertisers. This will have some effect. Lavabit, Silent Circle, and now Groklaw have all been successfully intimidated. Certainly others have as well. But public opinion is shifting against the intelligence community. I don't think it will intimidate future whistleblowers. If the treatment of Chelsea Manning didn't discourage them, nothing will.
This leaves one last possible explanation -- those in power were angry and impulsively acted on that anger. They're lashing out: sending a message and demonstrating that they're not to be messed with -- that the normal rules of polite conduct don't apply to people who screw with them. That's probably the scariest explanation of all. Both the US and UK intelligence apparatuses have enormous money and power, and they have already demonstrated that they are willing to ignore their own laws. Once they start wielding that power unthinkingly, it could get really bad for everyone.
And it's not going to be good for them, either. They seem to want Snowden so badly that they'll burn the world down to get him. But every time they act with this kind of impulsive aggression -- convincing the governments of Portugal and France to block the plane carrying the Bolivian president because they thought Snowden was on it is another example -- they lose a small amount of moral authority around the world, and some ability to act in the same way again. The more pressure Snowden feels, the more likely he is to give up on releasing the documents slowly and responsibly, and publish all of them at once -- the same way that WikiLeaks published the US State Department cables.
Just this week, the Wall Street Journal reported on some new NSA secret programs that are spying on Americans. It got the information from "interviews with current and former intelligence and government officials and people from companies that help build or operate the systems, or provide data," not from Snowden. This is only the beginning. The media will not be intimidated. I will not be intimidated. But it scares me that the NSA is so blind that it doesn't see it.
This essay previously appeared on TheAtlantic.com.
EDITED TO ADD: I've been thinking about it, and there's a good chance that the NSA doesn't know what Snowden has. He was a sysadmin. He had access. Most of the audits and controls protect against normal users; someone with root access is going to be able to bypass a lot of them. And he had the technical chops to cover his tracks when he couldn't just evade the auditing systems.
The AP makes an excellent point about this:
The disclosure undermines the Obama administration's assurances to Congress and the public that the NSA surveillance programs can't be abused because its spying systems are so aggressively monitored and audited for oversight purposes: If Snowden could defeat the NSA's own tripwires and internal burglar alarms, how many other employees or contractors could do the same?
And, to be clear, I didn't mean to say that intimidation wasn't the government's motive. I believe it was, and that it was poorly thought out intimidation: lashing out in anger, rather than from some Machiavellian strategy. (Here's a similar view.) If they wanted Miranda's electronics, they could have confiscated them and sent him on his way in fifteen minutes. Holding him for nine hours -- the absolute maximum they could under the current law -- was intimidation.
I am reminded of the phone call the Guardian received from British government. The exact quote reported was: "You've had your fun. Now we want the stuff back." That's something you would tell your child. And that's the power dynamic that's going on here.
EDITED TO ADD (8/27): Jay Rosen has an excellent essay on this.
Ever since Edward Snowden walked out of a National Security Agency facility in May with electronic copies of thousands of classified documents, the finger-pointing has concentrated on government's security failures. Yet the debacle illustrates the challenge of trusting people in any organization.
The problem is easy to describe. Organizations require trusted people, but they can't be sure those people are actually trustworthy. These individuals are essential to the organization, and they're also in the best position to betray it.
So how does an organization protect itself?
Securing trusted people requires three basic mechanisms (as I describe in my book Beyond Fear). The first is compartmentalization. Trust doesn't have to be all or nothing; it makes sense to give relevant workers only the access, capabilities and information they need to accomplish their assigned tasks. In the military, even if they have the requisite clearance, people are only told what they "need to know." The same policy occurs naturally in companies.
This isn't simply a matter of always granting more senior employees a higher degree of trust. For example, only authorized armored-car delivery people can unlock automated teller machines and put money inside; even the bank president can't do so. Think of an employee as operating within a sphere of trust -- a set of assets and functions he or she has access to. Organizations act in their best interest by making that sphere as small as possible.
The idea is that if someone turns out to be untrustworthy, he or she can only do so much damage. This is where the NSA failed with Snowden. As a system administrator, he needed access to many of the agency's computer systems -- and he needed access to everything on those machines. This allowed him to make copies of documents he didn't need to see.
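The "sphere of trust" idea above is just per-asset, default-deny access control: a grant attaches a specific person to a specific asset, and seniority or clearance level by itself confers nothing. A minimal sketch, with made-up users and assets:

```python
# Minimal sketch of compartmentalization (hypothetical users/assets):
# access is granted per asset, and the default is deny.
GRANTS = {
    "alice": {"payroll-db"},                     # only what her job needs
    "bob":   {"mail-server", "backup-system"},   # sysadmin duties only
}

def can_access(user: str, asset: str) -> bool:
    """Default deny: no explicit grant means no access,
    regardless of the user's seniority or clearance."""
    return asset in GRANTS.get(user, set())
```

The NSA's failure with Snowden, in these terms, was that his sysadmin grant covered the machines *and* everything stored on them; a smaller sphere would have given him the machines without readable access to the documents.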
The second mechanism for securing trust is defense in depth: Make sure a single person can't compromise an entire system. NSA Director General Keith Alexander has said he is doing this inside the agency by instituting what is called two-person control: There will always be two people performing system-administration tasks on highly classified computers.
Defense in depth reduces the ability of a single person to betray the organization. If this system had been in place and Snowden's superior had been notified every time he downloaded a file, Snowden would have been caught well before his flight to Hong Kong.
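Two-person control is easy to express in code: a sensitive action proceeds only with sign-off from a second, distinct person, and every attempt -- allowed or not -- is logged so a superior can review it. A hedged sketch, not any agency's actual system:

```python
# Sketch of two-person control for sensitive sysadmin actions.
# Every attempt is appended to an audit log, approved or not.
audit_log = []

def download(doc, requester, approver=None):
    """Allow the download only if a second, distinct person approved it."""
    audit_log.append((doc, requester, approver))  # superior sees every attempt
    if approver is None or approver == requester:
        return False  # no self-approval: two distinct people required
    return True
```

The cost noted below is visible even in the sketch: every action now ties up two administrators instead of one.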
The final mechanism is to try to ensure that trusted people are, in fact, trustworthy. The NSA does this through its clearance process, which at high levels includes lie-detector tests (even though they don't work) and background investigations. Many organizations perform reference and credit checks and drug tests when they hire new employees. Companies may refuse to hire people with criminal records or noncitizens; they might hire only those with a particular certification or membership in certain professional organizations. Some of these measures aren't very effective -- it's pretty clear that personality profiling doesn't tell you anything useful, for example -- but the general idea is to verify, certify and test individuals to increase the chance they can be trusted.
These measures are expensive. It costs the U.S. government about $4,000 to qualify someone for top-secret clearance. Even in a corporation, background checks and screenings are expensive and add considerable time to the hiring process. Giving employees access to only the information they need can hamper them in an agile organization in which needs constantly change. Security audits are expensive, and two-person control is even more expensive: it can double personnel costs. We're always making trade-offs between security and efficiency.
The best defense is to limit the number of trusted people needed within an organization. Alexander is doing this at the NSA -- albeit too late -- by trying to reduce the number of system administrators by 90 percent. This is just a tiny part of the problem; in the U.S. government, as many as 4 million people, including contractors, hold top-secret or higher security clearances. That's far too many.
More surprising than Snowden's ability to get away with taking the information he downloaded is that there haven't been dozens more like him. His uniqueness -- along with the few who have gone before him and how rare whistle-blowers are in general -- is a testament to how well we normally do at building security around trusted people.
Here's one last piece of advice, specifically about whistle-blowers. It's much harder to keep secrets in a networked world, and whistle-blowing has become the civil disobedience of the information age. A public or private organization's best defense against whistle-blowers is to refrain from doing things it doesn't want to read about on the front page of the newspaper. This may come as a shock in a market-based system, in which morally dubious behavior is often rewarded as long as it's legal and illegal activity is rewarded as long as you can get away with it.
No organization, whether it's a bank entrusted with the privacy of its customer data, an organized-crime syndicate intent on ruling the world, or a government agency spying on its citizens, wants to have its secrets disclosed. In the information age, though, it may be impossible to avoid.
This essay previously appeared on Bloomberg.com.
EDITED TO ADD 8/22: A commenter on the Bloomberg site added another security measure: pay your people more. Better paid people are less likely to betray the organization that employs them. I should have added that, especially since I make that exact point in Liars and Outliers.
Orin Kerr envisions what the ECPA should look like today:
Abstract: In 1986, Congress enacted the Electronic Communications Privacy Act (ECPA) to regulate government access to Internet communications and records. ECPA is widely seen as outdated, and ECPA reform is now on the Congressional agenda. At the same time, existing reform proposals retain the structure of the 1986 Act and merely tinker with a few small aspects of the statute. This Article offers a thought experiment about what might happen if Congress repealed ECPA and enacted a new privacy statute to replace it.
The new statute would look quite different from ECPA because overlooked changes in Internet technology have dramatically altered the assumptions on which the 1986 Act was based. ECPA was designed for a network world with high storage costs and only local network access. Its design reflects the privacy threats of such a network, including high privacy protection for real-time wiretapping, little protection for non-content records, and no attention to particularity or jurisdiction. Today's Internet reverses all of these assumptions. Storage costs have plummeted, leading to a reality of almost total storage. Even United States-based services now serve a predominantly foreign customer base. A new statute would need to account for these changes.
The Article contends that a next generation privacy act should contain four features. First, it should impose the same requirement on access to all contents. Second, it should impose particularity requirements on the scope of disclosed metadata. Third, it should impose minimization rules on all accessed content. And fourth, it should impose a two-part territoriality regime with a mandatory rule structure for United States-based users and a permissive regime for users located abroad.
Structural colors rely exclusively on the density and shape of the material rather than its chemical properties. The latest research from the UCSB team shows that specialized cells in the squid skin called iridocytes contain deep pleats or invaginations of the cell membrane extending deep into the body of the cell. This creates layers or lamellae that operate as a tunable Bragg reflector. Bragg reflectors are named after the British father and son team who more than a century ago discovered how periodic structures reflect light in a very regular and predictable manner.
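The "regular and predictable" reflection is the Bragg condition, a standard textbook relation (not from the article itself): constructive interference from a periodic stack picks out wavelengths set by the layer spacing, which is why a cell that can squeeze its lamellae can tune its color.

```latex
% Bragg condition for constructive reflection from a periodic
% structure: m is the diffraction order, d the layer period,
% \theta the angle measured from the layer planes.
m\lambda = 2d\sin\theta

% For a two-material quarter-wave stack at normal incidence,
% the peak reflected wavelength is set by the optical thickness
% of one period:
\lambda_{\mathrm{peak}} = 2\,(n_1 d_1 + n_2 d_2)
```

Shrinking the period $d$ (or the optical thicknesses $n_i d_i$) shifts the reflected peak toward shorter wavelengths -- blue -- and stretching it shifts the peak toward red.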
As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.
Interesting paper: "The Banality of Security: The Curious Case of Surveillance Cameras," by Benjamin Goold, Ian Loader, and Angélica Thumala (full paper is behind a paywall).
Abstract: Why do certain security goods become banal (while others do not)? Under what conditions does banality occur and with what effects? In this paper, we answer these questions by examining the story of closed circuit television cameras (CCTV) in Britain. We consider the lessons to be learned from CCTV’s rapid -- but puzzling -- transformation from novelty to ubiquity, and what the banal properties of CCTV tell us about the social meanings of surveillance and security. We begin by revisiting and reinterpreting the historical process through which camera surveillance has diffused across the British landscape, focusing on the key developments that encoded CCTV in certain dominant meanings (around its effectiveness, for example) and pulled the cultural rug out from under alternative or oppositional discourses. Drawing upon interviews with those who produce and consume CCTV, we tease out and discuss the family of meanings that can lead one justifiably to describe CCTV as a banal good. We then examine some frontiers of this process and consider whether novel forms of camera surveillance (such as domestic CCTV systems) may press up against the limits of banality in ways that risk unsettling security practices whose social value and utility have come to be taken for granted. In conclusion, we reflect on some wider implications of banal security and its limits.
Last weekend, a Texas couple apparently discovered that the electronic baby monitor in their children's bedroom had been hacked. According to a local TV station, the couple said they heard an unfamiliar voice coming from the room, went to investigate and found that someone had taken control of the camera monitor remotely and was shouting profanity-laden abuse. The child's father unplugged the monitor.
What does this mean for the rest of us? How secure are consumer electronic systems, now that they're all attached to the Internet?
The answer is not very, and it's been this bad for many years. Security vulnerabilities have been found in all types of webcams, cameras of all sorts, implanted medical devices, cars, and even smart toilets -- not to mention yachts, ATMs, industrial control systems and military drones.
All of these things have long been hackable. Those of us who work in security are often amazed that most people don't know about it.
Why are they hackable? Because security is very hard to get right. It takes expertise, and it takes time. Most companies don't care because most customers buying security systems and smart appliances don't know enough to care. Why should a baby monitor manufacturer spend all sorts of money making sure its security is good when the average customer won't even notice?
Even worse, that consumer will look at two competing baby monitors -- a more expensive one with better security, and a cheaper one with minimal security -- and buy the cheaper one. Without the expertise to make an informed buying decision, cheaper wins.
A lot of hacks happen because the users don't configure or install their devices properly, but that's really the fault of the manufacturer. These are supposed to be consumer devices, not specialized equipment for security experts only.
This sort of thing is true in other aspects of society, and we have a variety of mechanisms to deal with it. Government regulation is one of them. For example, few of us can differentiate real pharmaceuticals from snake oil, so the FDA regulates what can be sold and what sorts of claims vendors can make. Independent product testing is another. You and I might not be able to tell a well-made car from a poorly-made one at a glance, but we can both read the reports from a variety of testing agencies.
Computer security has resisted these mechanisms, both because the industry changes so quickly and because this sort of testing is hard and expensive. But the effect is that we're all being sold a lot of insecure consumer products with embedded computers. And as these computers get connected to the Internet, the problems will get worse.
The moral here isn't that your baby monitor could be hacked. The moral is that pretty much every "smart" everything can be hacked, and because consumers don't care, the market won't fix the problem.
This essay previously appeared on CNN.com. I wrote it in about half an hour, on request, and I'm not really happy with it. I should have talked more about the economics of good security, as well as the economics of hacking. The point is that we don't have to worry about hackers smart enough to figure out these vulnerabilities, but those dumb hackers who just use software tools written and distributed by the smart hackers. Ah well, next time.
There have been a bunch of articles about an information theory paper with vaguely sensational headlines like "Encryption is less secure than we thought" and "Research shakes crypto foundations." It's actually not that bad.
Basically, the researchers argue that the traditional measurement of Shannon entropy isn't the right model to use for cryptography, and that minimum entropy is. This difference may make some ciphertexts easier to decrypt, but not in ways that have practical implications in the general case. It's the same thinking that leads us to guess passwords from a dictionary rather than randomly -- because we know that humans both created the passwords and have to remember them.
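To make the distinction concrete, here's a minimal sketch (my numbers, not the paper's). Shannon entropy measures the average unpredictability of a distribution; min-entropy measures only how likely the single most probable outcome is -- which is what matters to an attacker who guesses the most common password first:

```python
import math

def shannon_entropy(probs):
    """H = -sum(p * log2(p)): average unpredictability across the distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def min_entropy(probs):
    """H_min = -log2(max(p)): difficulty of guessing the single likeliest outcome."""
    return -math.log2(max(probs))

# A skewed "password" distribution: one very common choice, 1023 rare ones.
probs = [0.5] + [0.5 / 1023] * 1023

print(shannon_entropy(probs))  # about 6 bits on average
print(min_entropy(probs))      # exactly 1 bit: half the time, guess #1 wins
```

The gap between the two numbers is exactly why dictionary attacks work: the average case looks fine, but the worst case is what the attacker exploits.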
This isn't news -- lots of cryptography papers make use of minimum entropy instead of Shannon entropy already -- and it's hard to see what the contribution of this paper is. Note that the paper was presented at an information theory conference, and not a cryptography conference. My guess is that there wasn't enough crypto expertise on the program committee to reject the paper.
So don't worry; cryptographic algorithms aren't going to come crumbling down anytime soon. Well, they might -- but not because of this result.
Not much surprising in this new survey.
Many teens ages 12-17 report that they usually figure out how to manage content sharing and privacy settings on their own. Focus group interviews with teens suggest that for their day-to-day privacy management, teens are guided through their choices in the app or platform when they sign up, or find answers through their own searching and use of their preferred platform.
At the same time, though, a nationally representative survey of teen internet users shows that, at some point, 70% of them have sought advice from someone else about how to manage their privacy online. When they do seek outside help, teens most often turn to friends, parents or other close family members.
There was a presentation at Black Hat last month warning us of a "factoring cryptopocalypse": a moment when factoring numbers and solving the discrete log problem become easy, and both RSA and DH break. This presentation was provocative, and has generated a lot of commentary, but I don't see any reason to worry.
Yes, breaking modern public-key cryptosystems has gotten easier over the years. This has been true for a few decades now. Back in 1999, I wrote this about factoring:
Factoring has been getting easier. It's been getting easier faster than anyone has anticipated. I see four reasons why this is so:
- Computers are getting faster.
- Computers are better networked.
- The factoring algorithms are getting more efficient.
- Fundamental advances in mathematics are giving us better factoring algorithms.
I could have said the same thing about the discrete log problem. And, in fact, advances in solving one problem tend to mirror advances in solving the other.
The reasons are arrayed in order of unpredictability. The first two -- advances in computing and networking speed -- basically follow Moore's Law (and others), year after year. The third comes in regularly, but in fits and starts: a 2x improvement here, a 10x improvement there. It's the fourth that's the big worry. Fundamental mathematical advances only come once in a while, but when they do come, the effects can be huge. If factoring ever becomes "easy" such that RSA is no longer a viable cryptographic algorithm, it will be because of this sort of advance.
The authors base their current warning on some recent fundamental advances in solving the discrete log problem, but that work doesn't generalize to the types of numbers used for cryptography. Nor is it likely to; the result is inherently specialized.
This isn't to say that solving these problems won't continue to get easier, but so far it has been trivially easy to increase key lengths to stay ahead of the advances. I expect this to remain true for the foreseeable future.
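As a rough illustration of why longer keys stay ahead, here's a back-of-the-envelope sketch using the standard heuristic complexity estimate for the general number field sieve. The constants are asymptotic approximations, not a precise security claim, but they show why each jump in key length buys a superpolynomial increase in the attacker's work:

```python
import math

def gnfs_cost(bits):
    """Heuristic GNFS work factor for factoring a `bits`-bit modulus:
    exp((64/9)^(1/3) * (ln n)^(1/3) * (ln ln n)^(2/3)).
    Asymptotic estimate only -- ignores constant factors."""
    ln_n = bits * math.log(2)
    return math.exp((64 / 9) ** (1 / 3) * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3))

for bits in (1024, 2048, 3072):
    # Roughly 87, 117, and 139 bits of work, respectively:
    # doubling the key length adds ~30 bits of attacker effort.
    print(bits, round(math.log2(gnfs_cost(bits)), 1))
```

So even a 10x algorithmic improvement -- a few bits of work shaved off -- is absorbed by a modest key-length increase; only a fundamental mathematical advance changes the shape of that curve.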
I made the list of Wired's best "Government and Security" blogs.
Terrorist organizations have the same management problems as other organizations, and new ones besides:
Terrorist leaders also face a stubborn human resources problem: Their talent pool is inherently unstable. Terrorists are obliged to seek out recruits who are predisposed to violence -- that is to say, young men with a chip on their shoulder. Unsurprisingly, these recruits are not usually disposed to following orders or recognizing authority figures. Terrorist managers can craft meticulous long-term strategies, but those are of little use if the people tasked with carrying them out want to make a name for themselves right now.
Terrorist managers are also obliged to place a premium on bureaucratic control, because they lack other channels to discipline the ranks. When Walmart managers want to deal with an unruly employee or a supplier who is defaulting on a contract, they can turn to formal legal procedures. Terrorists have no such option. David Ervine, a deceased Irish Unionist politician and onetime bomb maker for the Ulster Volunteer Force (UVF), neatly described this dilemma to me in 2006. "We had some very heinous and counterproductive activities being carried out that the leadership didn't punish because they had to maintain the hearts and minds within the organization," he said....
It turns out that the NSA's domestic and world-wide surveillance apparatus is even more extensive than we thought. Bluntly: The government has commandeered the Internet. Most of the largest Internet companies provide information to the NSA, betraying their users. Some, as we've learned, fight and lose. Others cooperate, either out of patriotism or because they believe it's easier that way.
I have one message to the executives of those companies: fight.
Do you remember those old spy movies, when the higher ups in government decide that the mission is more important than the spy's life? It's going to be the same way with you. You might think that your friendly relationship with the government means that they're going to protect you, but they won't. The NSA doesn't care about you or your customers, and will burn you the moment it's convenient to do so.
We're already starting to see that. Google, Yahoo, Microsoft and others are pleading with the government to allow them to explain details of what information they provided in response to National Security Letters and other government demands. They've lost the trust of their customers, and explaining what they do -- and don't do -- is how to get it back. The government has refused; they don't care.
It will be the same with you. There are lots more high-tech companies who have cooperated with the government. Most of those company names are somewhere in the thousands of documents that Edward Snowden took with him, and sooner or later they'll be released to the public. The NSA probably told you that your cooperation would forever remain secret, but they're sloppy. They'll put your company name on presentations delivered to thousands of people: government employees, contractors, probably even foreign nationals. If Snowden doesn't have a copy, the next whistleblower will.
This is why you have to fight. When it becomes public that the NSA has been hoovering up all of your users' communications and personal files, what's going to save you in the eyes of those users is whether or not you fought. Fighting will cost you money in the short term, but capitulating will cost you more in the long term.
Already companies are taking their data and communications out of the US.
The extreme case of fighting is shutting down entirely. The secure e-mail service Lavabit did that last week, abruptly. Ladar Levison, that site's owner, wrote on his homepage: "I have been forced to make a difficult decision: to become complicit in crimes against the American people or walk away from nearly ten years of hard work by shutting down Lavabit. After significant soul searching, I have decided to suspend operations. I wish that I could legally share with you the events that led to my decision."
The same day, Silent Circle followed suit, shutting down their e-mail service in advance of any government strong-arm tactics: "We see the writing on the wall, and we have decided that it is best for us to shut down Silent Mail now. We have not received subpoenas, warrants, security letters, or anything else by any government, and this is why we are acting now." I realize that this is extreme. Both of those companies can do it because they're small. Google or Facebook couldn't possibly shut themselves off rather than cooperate with the government. They're too large; they're public. They have to do what's economically rational, not what's moral.
But they can fight. You, an executive in one of those companies, can fight. You'll probably lose, but you need to take the stand. And you might win. It's time we called the government's actions what they really are: commandeering. Commandeering is a practice we're used to in wartime, where commercial ships are taken for military use, or production lines are converted to military production. But now it's happening in peacetime. Vast swaths of the Internet are being commandeered to support this surveillance state.
If this is happening to your company, do what you can to isolate the actions. Do you have employees with security clearances who can't tell you what they're doing? Cut off all automatic lines of communication with them, and make sure that only specific, required, authorized acts are being taken on behalf of government. Only then can you look your customers and the public in the face and say that you don't know what is going on -- that your company has been commandeered.
Journalism professor Jeff Jarvis recently wrote in the Guardian: "Technology companies: now is the moment when you must answer for us, your users, whether you are collaborators in the US government's efforts to 'collect it all' -- our every move on the internet -- or whether you, too, are victims of its overreach."
So while I'm sure it's cool to have a secret White House meeting with President Obama -- I'm talking to you, Google, Apple, AT&T, and whoever else was in the room -- resist. Attend the meeting, but fight the secrecy. Whose side are you on?
The NSA isn't going to remain above the law forever. Already public opinion is changing, against the government and their corporate collaborators. If you want to keep your users' trust, demonstrate that you were on their side.
This essay originally appeared on TheAtlantic.com.
I can't believe this was published ten days ago, and I'm only just finding out about it. Aren't all you people supposed to be sending me links of things I might be interested in?
This essay is filled with historical MI5 stories -- often bizarre, sometimes amusing. My favorite:
It was recently revealed that back in the 1970s -- at the height of the obsession with traitors -- MI5 trained a specially bred group of gerbils to detect spies. Gerbils have a very acute sense of smell and they were used in interrogations to tell whether the suspects were releasing adrenaline -- because that would show they were under stress and lying.
Then they tried the gerbils to see if they could detect terrorists who were about to carry a bomb onto a plane. But the gerbils got confused because they couldn't tell the difference between the terrorists and ordinary people who were frightened of flying who were also pumping out adrenaline in their sweat.
So the gerbils failed as well.
Rangzen looks like a really interesting ad hoc mesh networking system to circumvent government-imposed communications blackouts. I am particularly interested in how it uses reputation to determine who can be trusted, while maintaining some level of anonymity.
Abstract: A challenging problem in dissent networking is that of circumventing large-scale communication blackouts imposed by oppressive governments. Although prior work has not focused on the need for user anonymity, we contend that it is essential. Without anonymity, governments can use communication networks to track and persecute users. A key challenge for decentralized networks is that of resource allocation and control. Network resources must be shared in a manner that deprioritizes unwanted traffic and abusive users. This task is typically addressed through reputation systems that conflict with anonymity. Our work addresses this paradox: We prioritize resources in a privacy-preserving manner to create an attack-resilient, anonymity-preserving, mobile ad-hoc network. Our prioritization mechanism exploits the properties of a social trust graph to promote messages relayed via trusted nodes. We present Rangzen, a microblogging solution that uses smartphones to opportunistically relay messages among citizens in a delay-tolerant network (DTN) that is independent of government or corporate-controlled infrastructure.
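As a purely hypothetical sketch of the general idea -- this is not Rangzen's actual mechanism, which computes trust privately, and all names here are invented -- one could rank relayed messages by overlap between friend sets, so messages arriving via socially close nodes propagate first while strangers' messages still spread, just more slowly:

```python
def trust_score(my_friends: set, peer_friends: set) -> float:
    """Hypothetical proxy for social trust: Jaccard similarity of friend sets.
    (A real privacy-preserving system would compute this without revealing the sets.)"""
    union = my_friends | peer_friends
    if not union:
        return 0.0
    return len(my_friends & peer_friends) / len(union)

def prioritize(messages, my_friends):
    """Relay higher-trust messages first; untrusted traffic is deprioritized, not blocked."""
    return sorted(messages,
                  key=lambda m: trust_score(my_friends, m["relayer_friends"]),
                  reverse=True)

msgs = [
    {"text": "spam", "relayer_friends": {"x"}},
    {"text": "protest at noon", "relayer_friends": {"alice", "bob"}},
]
ordered = prioritize(msgs, my_friends={"alice", "bob", "carol"})
# "protest at noon" sorts first: 2 shared friends versus 0
```

The interesting research problem, per the abstract, is doing this scoring without deanonymizing anyone -- the sketch above leaks exactly the information Rangzen is designed to protect.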
This is exactly the sort of thing I was thinking about in this essay.
Rise of the Warrior Cop: The Militarization of America's Police Forces, by Radley Balko, PublicAffairs, 2013, 400 pages.
War as a rhetorical concept is firmly embedded in American culture. Over the past several decades, federal and local law enforcement has been enlisted in a war on crime, a war on drugs and a war on terror. These wars are more than just metaphors designed to rally public support and secure budget appropriations. They change the way we think about what the police do. Wars mean shooting first and asking questions later. Wars require military tactics and weaponry. Wars mean civilian casualties.
Over the decades, the war metaphor has resulted in drastic changes in the way the police operate. At both federal and state levels, the formerly hard line between police and military has blurred. Police are increasingly using military weaponry, employing military tactics and framing their mission using military terminology. Right now, there is a Third Amendment case -- that's the one about quartering soldiers in private homes without consent -- making its way through the courts. It involves someone who refused to allow the police to occupy his home in order to gain a "tactical advantage" against the house next door. The police returned later, broke down his door, forced him to the floor and then arrested him for obstructing an officer. They also shot his dog with pepperball rounds. It's hard to argue with the premise of this case; police officers are acting so much like soldiers that it can be hard to tell the difference.
In Rise of the Warrior Cop, Radley Balko chronicles the steady militarization of the police in the U.S. A detailed history of a dangerous trend, Mr. Balko's book tracks police militarization over the past 50 years, a period that not coincidentally corresponds with the rise of SWAT teams. First established in response to the armed riots of the late 1960s, they were originally exclusive to big cities and deployed only against heavily armed and dangerous criminals. Today SWAT teams are nothing special. They've multiplied like mushrooms. Every city has a SWAT team; 80% of towns between 25,000 and 50,000 people do as well. These teams are busy; in 2005 there were between 50,000 and 60,000 SWAT raids in the U.S. The tactics are pretty much what you would expect -- breaking down doors, rushing in with military weaponry, tear gas -- but the targets aren't. SWAT teams are routinely deployed against illegal poker games, businesses suspected of employing illegal immigrants and barbershops with unlicensed hair stylists.
In Prince George's County, MD, alone, SWAT teams were deployed about once a day in 2009, overwhelmingly to serve search or arrest warrants, and half of those warrants were for "misdemeanors and nonserious felonies." Much of Mr. Balko's data is approximate, because police departments don't publish data, and they uniformly oppose any attempts at transparency or oversight. But he has good Maryland data from 2009 on, because after the mayor of Berwyn Heights was mistakenly attacked and terrorized in his home by a SWAT team in 2008, the state passed a law requiring police to report quarterly on their use of SWAT teams: how many times, for what purposes and whether any shots were fired during the raids.
Besides documenting policy decisions at the federal and state levels, the author examines the influence of military contractors who have looked to expand into new markets. And he tells some pretty horrific stories of SWAT raids gone wrong. A lot of dogs get shot in the book. Most interesting are the changing attitudes of police. As the stories progress from the 1960s to the 2000s, we see police shift from being uncomfortable with military weapons and tactics -- and deploying them only as the very last resort in the most extreme circumstances -- to accepting and even embracing their routine use.
This development coincides with the rhetorical use of the word "war." To the police, civilians are citizens to protect. To the military, we are a population to be subdued. Wars can temporarily override the Constitution. When the Justice Department walks into Congress with requests for money and new laws to fight a war, it is going to get a different response than if it came in with a story about fighting crime. Maybe the most chilling quotation in the book is from William French Smith, President Reagan's first attorney general: "The Justice Department is not a domestic agency. It is the internal arm of national defense." Today we see that attitude in the war on terror. Because it's a war, we can arrest and imprison Americans indefinitely without charges. We can eavesdrop on the communications of all Americans without probable cause. We can assassinate American citizens without due process. We can have secret courts issuing secret rulings about secret laws. The militarization of the police is just one aspect of an increasing militarization of government.
Mr. Balko saves his prescriptions for reform until the last chapter. Two of his fixes, transparency and accountability, are good remedies for all governmental overreach. Specific to police departments, he also recommends halting mission creep, changing police culture and embracing community policing. These are far easier said than done. His final fix is ending the war on drugs, the source of much police violence. To this I would add ending the war on terror, another rhetorical war that costs us hundreds of billions of dollars, gives law enforcement powers directly prohibited by the Constitution and leaves us no safer.
This essay originally appeared in the Wall Street Journal.
General Keith Alexander thinks he can improve security by automating sysadmin duties such that 90% of them can be fired:
Using technology to automate much of the work now done by employees and contractors would make the NSA's networks "more defensible and more secure," as well as faster, he said at the conference, in which he did not mention Snowden by name.
Does anyone know a sysadmin anywhere who believes it's possible to automate 90% of his job? Or who thinks any such automation will actually improve security?
He's stuck. Computerized systems require trusted people to administer them. And any agency with all that computing power is going to need thousands of sysadmins. Some of them are going to be whistleblowers.
Leaking secret information is the civil disobedience of our age. Alexander has to get used to it.
Lots of sports stadiums have instituted Draconian new rules. Here are the rules for St. Louis Rams games:
Fans will be able to carry the following style and size bag, package, or container at stadium plaza areas, stadium gates, or when approaching queue lines of fans awaiting entry into the stadium:
- Bags that are clear plastic, vinyl or PVC and do not exceed 12" x 6" x 12". (Official NFL team logo clear plastic tote bags are available through club merchandise outlets or at nflshop.com), or
- One-gallon clear plastic freezer bag (Ziploc bag or similar).
- Small clutch bags, approximately the size of a hand, with or without a handle or strap, may be carried into the stadium along with one of the clear bag options.
- An exception will be made for medically necessary items after proper inspection at a gate designated for this purpose.
Prohibited items include, but are not limited to: purses larger than a clutch bag, coolers, briefcases, backpacks, fanny packs, cinch bags, luggage of any kind, seat cushions, computer bags and camera bags or any bag larger than the permissible size.
Of course you're supposed to think this is about terrorism. My guess is that this is to help protect the security of the profits at the concession stands.
Lavabit, the more-secure e-mail service that Edward Snowden -- among others -- used, has abruptly shut down. From the message on their homepage:
I have been forced to make a difficult decision: to become complicit in crimes against the American people or walk away from nearly ten years of hard work by shutting down Lavabit. After significant soul searching, I have decided to suspend operations. I wish that I could legally share with you the events that led to my decision. I cannot....
This experience has taught me one very important lesson: without congressional action or a strong judicial precedent, I would strongly recommend against anyone trusting their private data to a company with physical ties to the United States.
In case something happens to the homepage, the full message is recorded here.
Also yesterday, Silent Circle shut down its email service:
We see the writing on the wall, and we have decided that it is best for us to shut down Silent Mail now. We have not received subpoenas, warrants, security letters, or anything else by any government, and this is why we are acting now.
This illustrates the difference between a business owned by a person, and a public corporation owned by shareholders. Ladar Levison can decide to shutter Lavabit -- a move that will personally cost him money -- because he believes it's the right thing to do. I applaud that decision, but it's one he's only able to make because he doesn't have to answer to public shareholders. Could you imagine what would happen if Mark Zuckerberg or Larry Page decided to shut down Facebook or Google rather than answer National Security Letters? They couldn't. They would be fired.
When the small companies can no longer operate, it's another step in the consolidation of the surveillance society.
It's being reported, although there's no indication of where this rumor is coming from or what it's based on.
...the new tactic allows terrorists to dip ordinary clothing into the liquid to make the clothes themselves into explosives once dry.
"It's ingenious," one of the officials said.
Another senior official said that the tactic would not be detected by current security measures.
I can see the trailer now. "In a world where your very clothes might explode at any moment, Bruce Willis is, Bruce Willis in a Michael Bay film: BLOW UP! Co-starring Lindsay Lohan..."
I guess there's nothing to be done but to force everyone to fly naked.
Twitter just rolled out a pretty nice two-factor authentication system using your smart phone as the second factor:
The new two-factor system works like this. A user enrolls using the mobile app, which generates a 2048-bit RSA keypair. The private key lives on the phone itself, and the public key is uploaded to Twitter’s server.
When Twitter receives a new login request with a username and password, the server sends a challenge based on a 190-bit, 32-character random nonce to the mobile app -- along with a notification that gives the user the time, location, and browser information associated with the login request. The user can then opt to approve or deny this login request. If approved, the app signs the challenge with its private key and relays the response back to the server. The server verifies the signed challenge against the request ID, and if it authenticates, the user is automatically logged in.
On the user end, this means there’s no string of numbers to enter, nor do you have to swap to a third party authentication app or carrier. You just use the Twitter client itself. It means that the system isn’t vulnerable to a compromised SMS delivery channel, and moreover, it’s easy.
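The scheme described above is essentially an RSA challenge-response. Here's a toy sketch of the pattern with deliberately tiny textbook-RSA parameters for readability -- Twitter's actual implementation uses 2048-bit keys and proper signature padding, and none of these numbers or function names are theirs:

```python
import hashlib
import secrets

# Toy RSA parameters (illustrative only -- real keys are 2048-bit with padding).
p, q = 61, 53
n = p * q      # modulus: 3233
e = 17         # public exponent
d = 2753       # private exponent: e*d = 1 mod (p-1)*(q-1) = 3120

def sign(message: bytes) -> int:
    """What the phone does: sign the hashed challenge with the private key."""
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(h, d, n)

def verify(message: bytes, signature: int) -> bool:
    """What the server does: check the signature with the stored public key."""
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == h

challenge = secrets.token_bytes(24)   # server's random nonce
assert verify(challenge, sign(challenge))
```

The nice property, as the quoted description notes, is that the secret never leaves the phone: the server only ever stores the public key, so a server-side breach or a compromised SMS channel yields nothing that lets an attacker forge an approval.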
In July 2012, responding to allegations that the video-chat service Skype -- owned by Microsoft -- was changing its protocols to make it possible for the government to eavesdrop on users, Corporate Vice President Mark Gillett took to the company's blog to deny it.
Turns out that wasn't quite true.
Or at least he -- or the company's lawyers -- carefully crafted a statement that could be defended as true while completely deceiving the reader. You see, Skype wasn't changing its protocols to make it possible for the government to eavesdrop on users, because the government was already able to eavesdrop on users.
At a Senate hearing in March, Director of National Intelligence James Clapper assured the committee that his agency didn't collect data on hundreds of millions of Americans. He was lying, too. He later defended his lie by inventing a new definition of the word "collect," an excuse that didn't even pass the laugh test.
As Edward Snowden's documents reveal more about the NSA's activities, it's becoming clear that we can't trust anything anyone official says about these programs.
Apple says it's never heard of PRISM. Of course not; that's the internal name of the NSA database. Companies are publishing reports purporting to show how few requests for customer-data access they've received, a meaningless number when a single Verizon request can cover all of their customers. The Guardian reported that Microsoft secretly worked with the NSA to subvert the security of Outlook, something it carefully denies. Even President Obama's justifications and denials are phrased with the intent that the listener will take his words very literally and not wonder what they really mean.
NSA Director Gen. Keith Alexander has claimed that the NSA's massive surveillance and data mining programs have helped stop more than 50 terrorist plots, 10 inside the U.S. Do you believe him? I think it depends on your definition of "helped." We're not told whether these programs were instrumental in foiling the plots or whether they just happened to be of minor help because the data was there. It also depends on your definition of "terrorist plots." An examination of plots that the FBI claims to have foiled since 9/11 reveals that would-be terrorists have commonly been delusional, and most have been egged on by FBI undercover agents or informants.
Left alone, few were likely to have accomplished much of anything.
Both government agencies and corporations have cloaked themselves in so much secrecy that it's impossible to verify anything they say; revelation after revelation demonstrates that they've been lying to us regularly and tell the truth only when there's no alternative.
There's much more to come. Right now, the press has published only a tiny percentage of the documents Snowden took with him. And Snowden's files are only a tiny percentage of the number of secrets our government is keeping, awaiting the next whistle-blower.
Ronald Reagan once said "trust but verify." That works only if we can verify. In a world where everyone lies to us all the time, we have no choice but to trust blindly, and we have no reason to believe that anyone is worthy of blind trust. It's no wonder that most people are ignoring the story; it's just too much cognitive dissonance to try to cope with it.
This sort of thing can destroy our country. Trust is essential in our society. And if we can't trust either our government or the corporations that have intimate access into so much of our lives, society suffers. Study after study demonstrates the value of living in a high-trust society and the costs of living in a low-trust one.
Rebuilding trust is not easy, as anyone who has betrayed or been betrayed by a friend or lover knows, but the path involves transparency, oversight and accountability. Transparency first involves coming clean. Not a little bit at a time, not only when you have to, but complete disclosure about everything. Then it involves continuing disclosure. No more secret rulings by secret courts about secret laws. No more secret programs whose costs and benefits remain hidden.
Oversight involves meaningful constraints on the NSA, the FBI and others. This will be a combination of things: a court system that acts as a third-party advocate for the rule of law rather than a rubber-stamp organization, a legislature that understands what these organizations are doing and regularly debates requests for increased power, and vibrant public-sector watchdog groups that analyze and debate the government's actions.
Accountability means that those who break the law, lie to Congress or deceive the American people are held accountable. The NSA has gone rogue, and while it's probably not possible to prosecute people for what they did under the enormous veil of secrecy it currently enjoys, we need to make it clear that this behavior will not be tolerated in the future. Accountability also means voting, which means voters need to know what our leaders are doing in our name.
This is the only way we can restore trust. A market economy doesn't work unless consumers can make intelligent buying decisions based on accurate product information. That's why we have agencies like the FDA, truth-in-packaging laws and prohibitions against false advertising.
In the same way, democracy can't work unless voters know what the government is doing in their name. That's why we have open-government laws. Secret courts making secret rulings on secret laws, and companies flagrantly lying to consumers about the insecurity of their products and services, undermine the very foundations of our society.
Since the Snowden documents became public, I have been receiving e-mails from people seeking advice on whom to trust. As a security and privacy expert, I'm expected to know which companies protect their users' privacy and which encryption programs the NSA can't break. The truth is, I have no idea. No one outside the classified government world does. I tell people that they have no choice but to decide whom they trust and to then trust them as a matter of faith. It's a lousy answer, but until our government starts down the path of regaining our trust, it's the only thing we can do.
This essay originally appeared on CNN.com.
Last month, I wrote about the potential for mass surveillance mission creep: the tendency for the vast NSA surveillance apparatus to be used for other, lesser crimes. My essay was theoretical, but it turns out this is already happening.
Other agencies are already asking to use the NSA data:
Agencies working to curb drug trafficking, cyberattacks, money laundering, counterfeiting and even copyright infringement complain that their attempts to exploit the security agency's vast resources have often been turned down because their own investigations are not considered a high enough priority, current and former government officials say.
The Drug Enforcement Administration is already using this data, and lying about it:
A secretive U.S. Drug Enforcement Administration unit is funneling information from intelligence intercepts, wiretaps, informants and a massive database of telephone records to authorities across the nation to help them launch criminal investigations of Americans.
Although these cases rarely involve national security issues, documents reviewed by Reuters show that law enforcement agents have been directed to conceal how such investigations truly begin -- not only from defense lawyers but also sometimes from prosecutors and judges.
The undated documents show that federal agents are trained to "recreate" the investigative trail to effectively cover up where the information originated, a practice that some experts say violates a defendant's Constitutional right to a fair trial. If defendants don't know how an investigation began, they cannot know to ask to review potential sources of exculpatory evidence -- information that could reveal entrapment, mistakes or biased witnesses.
I find that "some experts say" bit funny. I suppose it's Reuters' way of pretending there's balance.
This is really bad. The surveillance state is closer than most of us think.
Imagine the government passed a law requiring all citizens to carry a tracking device. Such a law would immediately be found unconstitutional. Yet we all carry mobile phones.
If the National Security Agency required us to notify it whenever we made a new friend, the nation would rebel. Yet we notify Facebook. If the Federal Bureau of Investigation demanded copies of all our conversations and correspondence, it would be laughed at. Yet we provide copies of our e-mail to Google, Microsoft or whoever our mail host is; we provide copies of our text messages to Verizon, AT&T and Sprint; and we provide copies of other conversations to Twitter, Facebook, LinkedIn, or whatever other site is hosting them.
The primary business model of the Internet is built on mass surveillance, and our government's intelligence-gathering agencies have become addicted to that data. Understanding how we got here is critical to understanding how we undo the damage.
Computers and networks inherently produce data, and our constant interactions with them allow corporations to collect an enormous amount of intensely personal data about us as we go about our daily lives. Sometimes we produce this data inadvertently simply by using our phones, credit cards, computers and other devices. Sometimes we give corporations this data directly on Google, Facebook, Apple Inc.'s iCloud and so on in exchange for whatever free or cheap service we receive from the Internet in return.
The NSA is also in the business of spying on everyone, and it has realized it's far easier to collect all the data from these corporations than from us directly. In some cases, the NSA asks for this data nicely. In other cases, it makes use of subtle threats or overt pressure. If that doesn't work, it uses tools like national security letters.
The result is a corporate-government surveillance partnership, one that allows both the government and corporations to get away with things they couldn't otherwise.
There are two types of laws in the U.S., each designed to constrain a different type of power: constitutional law, which places limitations on government, and regulatory law, which constrains corporations. Historically, these two areas have largely remained separate, but today each group has learned how to use the other's laws to bypass their own restrictions. The government uses corporations to get around its limits, and corporations use the government to get around their limits.
This partnership manifests itself in various ways. The government uses corporations to circumvent its prohibitions against eavesdropping domestically on its citizens. Corporations rely on the government to ensure that they have unfettered use of the data they collect.
Here's an example: It would be reasonable for our government to debate the circumstances under which corporations can collect and use our data, and to provide for protections against misuse. But if the government is using that very data for its own surveillance purposes, it has an incentive to oppose any laws to limit data collection. And because corporations see no need to give consumers any choice in this matter -- because it would only reduce their profits -- the market isn't going to protect consumers, either.
Our elected officials are often supported, endorsed and funded by these corporations as well, setting up an incestuous relationship between corporations, lawmakers and the intelligence community.
The losers are us, the people, who are left with no one to stand up for our interests. Our elected government, which is supposed to be responsible to us, is not. And corporations, which in a market economy are supposed to be responsive to our needs, are not. What we have now is the death of privacy -- and that's very dangerous to democracy and liberty.
The simple answer is to blame consumers, who shouldn't use mobile phones, credit cards, banks or the Internet if they don't want to be tracked. But that argument deliberately ignores the reality of today's world. Everything we do involves computers, even if we're not using them directly. And by their nature, computers produce tracking data. We can't go back to a world where we don't use computers, the Internet or social networking. We have no choice but to share our personal information with these corporations, because that's how our world works today.
Curbing the power of the corporate-government surveillance partnership requires limitations both on what corporations can do with the data we choose to give them and on how and when the government can demand access to that data. Because both of these changes go against the interests of corporations and the government, we have to demand them as citizens and voters. We can lobby our government to operate more transparently -- disclosing the opinions of the Foreign Intelligence Surveillance Court would be a good start -- and hold our lawmakers accountable when it doesn't. But it's not going to be easy. There are strong interests doing their best to ensure that the steady stream of data keeps flowing.
This essay originally appeared on Bloomberg.com.
The Guardian discusses a new secret NSA program: XKeyscore. It's the desktop system that allows NSA agents to spy on anyone over the Internet in real time. It searches existing NSA databases -- presumably including PRISM -- and can create fingerprints to search all future data collections from systems like TRAFFIC THIEF. This seems to be what Edward Snowden meant when he said that he had the ability to spy on any American, in real time, from his desk.
In related news, this essay explains how "three-hop" analysis of the communications of suspected terrorists means that everyone in the US is spied on.
EDITED TO ADD (8/3): The math is wrong in that three-hop analysis essay. Apologies.
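The reason three-hop analysis sweeps in so many people is simple arithmetic: if each person has roughly c distinct contacts, then everyone within three hops of one target numbers up to c + c² + c³. A minimal sketch of that bound (the 100-contacts-per-person figure is an assumption for illustration, not a number from the essay; real contact lists overlap heavily, so the true count is smaller):

```python
def hop_reach(contacts_per_person, hops):
    """Upper bound on the number of people within the given number of
    hops of a single target, assuming every person has the same number
    of distinct contacts and ignoring overlap between contact lists."""
    return sum(contacts_per_person ** i for i in range(1, hops + 1))

# With an assumed 100 contacts per person, three hops from one suspect
# covers on the order of a million people: 100 + 10,000 + 1,000,000.
print(hop_reach(100, 3))  # 1010100
```

Even with heavy overlap between contact lists, a handful of targets at three hops can justify surveillance of a substantial fraction of the population.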
This seems not to be the NSA eavesdropping on everyone's Internet traffic, as was first assumed. It was one of those "see something, say something" amateur tips:
Suffolk County Criminal Intelligence Detectives received a tip from a Bay Shore based computer company regarding suspicious computer searches conducted by a recently released employee. The former employee's computer searches took place on this employee's workplace computer. On that computer, the employee searched the terms "pressure cooker bombs" and "backpacks."
EDITED TO ADD (8/2): Another article.
EDITED TO ADD (8/3): As more of the facts come out, this seems like less of an overreaction than I first thought. The person was an ex-employee of the company -- not an employee -- and was searching "pressure cooker bomb." It's not unreasonable for the company to call the police in that case, and for the police to investigate the searcher. Whether or not the employer should be monitoring Internet use is another matter.
The UK has banned researchers from revealing details of security vulnerabilities in car locks. In 2008, NXP -- the chipmaker spun off from Philips -- brought a similar suit against the researchers who broke its Mifare chip. That time, the company lost. This time, Volkswagen sued and won.
This is bad news for security researchers. (Remember back in 2001, when security researcher Ed Felten sued the RIAA in the US for the right to publish his research results?) We're not going to improve security unless we're allowed to publish our results, and we can't start suppressing scientific results just because a big corporation doesn't like what they do to its reputation.
EDITED TO ADD (8/14): Here's the ruling.
Schneier on Security is a personal website. Opinions expressed are not necessarily those of Resilient, an IBM Company.