Blog: July 2013 Archives

Neighborhood Security: Feeling vs. Reality

Research on why some neighborhoods feel safer:

Salesses and collaborators Katja Schechtner and César A. Hidalgo built an online comparison tool using Google Street View images to identify these often unseen triggers of our perception of place. Have enough people compare paired images of streets in New York or Boston, for instance, for the scenes that look more “safe” or “upper-class,” and eventually some patterns start to emerge.

“We found images with trash in it, and took the trash out, and we noticed a 30 percent increase in perception of safety,” Salesses says. “It’s surprising that something that easy had that large an effect.”

This also means some fairly cost-effective government interventions—collecting trash—could have a significant impact on how safe people feel in a neighborhood. “It’s like bringing a data source to something that’s always been subjective,” Salesses says.

I’ve written about the feeling and reality of security, and how they’re different. (That’s also the subject of this TEDx talk.) Yes, it’s security theater: things that make a neighborhood feel safer rather than actually safer. But when the neighborhood is actually safer than people think it is, this sort of security theater has value.

Original paper.

EDITED TO ADD (8/14): Two related links.

Posted on July 30, 2013 at 1:44 PM • 28 Comments

Really Clever Bank Card Fraud

This is a really clever social engineering attack against a bank-card holder:

It all started, according to the police, on the Saturday night where one of this gang will have watched me take money from the cash point. That’s the details of my last transaction taken care of. Sinister enough, the thought of being spied on while you’re trying to enjoy yourself at a garage night at the Buffalo Bar, but not the worst of it.

The police then believe I was followed home, which is how they got my address.

As for the call: well, credit where it’s due, it’s pretty clever. If you call a landline it’s up to you to end the call. If the other person, the person who receives the call, puts down the receiver, it doesn’t hang up, meaning that when I attempted to hang up to go and find my bank card, the fraudster was still on the other end, waiting for me to pick up the phone and call “the bank”. As I did this, he played a dial tone down the line, and then a ring tone, making me think it was a normal call.

I thought this phone trick didn’t work anymore. It doesn’t work at my house—I just tried it. Maybe it still works in much of the UK.

Posted on July 30, 2013 at 7:33 AM • 37 Comments

Obama's Continuing War Against Leakers

The Obama Administration has a comprehensive “insider threat” program to detect leakers from within government. This is pre-Snowden. Not surprisingly, the combination of profiling and “see something, say something” is unlikely to work.

In an initiative aimed at rooting out future leakers and other security violators, President Barack Obama has ordered federal employees to report suspicious actions of their colleagues based on behavioral profiling techniques that are not scientifically proven to work, according to experts and government documents.

The techniques are a key pillar of the Insider Threat Program, an unprecedented government-wide crackdown under which millions of federal bureaucrats and contractors must watch out for “high-risk persons or behaviors” among co-workers. Those who fail to report them could face penalties, including criminal charges.

Another critique.

Posted on July 29, 2013 at 6:28 AM • 61 Comments

Secret Information Is More Trusted

This is an interesting, if slightly disturbing, result:

In one experiment, we had subjects read two government policy papers from 1995, one from the State Department and the other from the National Security Council, concerning United States intervention to stop the sale of fighter jets between foreign countries.

The documents, both of which were real papers released through the Freedom of Information Act, argued different sides of the issue. Depending on random assignment, one was described as having been previously classified, the other as being always public. Most people in the study thought that whichever document had been “classified” contained more accurate and well-reasoned information than the public document.

In another experiment, people read a real government memo from 1978 written by members of the National Security Council about the sale of fighter jets to Taiwan; we then explained that the council used the information to make decisions. Again, depending on random assignment, some people were told that the document had been secret and for exclusive use by the council, and that it had been recently declassified under the Freedom of Information Act. Others were told that the document had always been public.

As we expected, people who thought the information was secret deemed it more useful, important and accurate than did those who thought it was public. And people judged the National Security Council’s actions based on the information as more prudent and wise when they believed the document had been secret.

[…]

Our study helps explain the public’s support for government intelligence gathering. A recent poll by the Pew Research Center for the People and the Press reported that a majority of Americans thought it was acceptable for the N.S.A. to track Americans’ phone activity to investigate terrorism. Some frustrated commentators have concluded that Americans have much less respect for their own privacy than they should.

But our research suggests another conclusion: the secret nature of the program itself may lead the public to assume that the information it gathers is valuable, without even examining what that information is or how it might be used.

Original paper abstract; the full paper is behind a paywall.

Posted on July 26, 2013 at 6:25 AM • 42 Comments

Details on NSA/FBI Eavesdropping

We’re starting to see Internet companies talk about the mechanics of how the US government spies on their users. Here, a Utah ISP owner describes his experiences with NSA eavesdropping:

We had to facilitate them to set up a duplicate port to tap in to monitor that customer’s traffic. It was a 2U (two-unit) PC that we ran a mirrored ethernet port to.

[What we ended up with was] a little box in our systems room that was capturing all the traffic to this customer. Everything they were sending and receiving.

Declan McCullagh explains how the NSA coerces companies to cooperate with its surveillance efforts. Basically, they want to avoid what happened with the Utah ISP.

Some Internet companies have reluctantly agreed to work with the government to conduct legally authorized surveillance on the theory that negotiations are less objectionable than the alternative—federal agents showing up unannounced with a court order to install their own surveillance device on a sensitive internal network. Those devices, the companies fear, could disrupt operations, introduce security vulnerabilities, or intercept more than is legally permitted.

“Nobody wants it on-premises,” said a representative of a large Internet company who has negotiated surveillance requests with government officials. “Nobody wants a box in their network…[Companies often] find ways to give tools to minimize disclosures, to protect users, to keep the government off the premises, and to come to some reasonable compromise on the capabilities.”

Precedents were established a decade or so ago when the government obtained legal orders compelling companies to install custom eavesdropping hardware on their networks.

And Brewster Kahle of the Internet Archive explains how he successfully fought a National Security Letter.

Posted on July 25, 2013 at 12:27 PM • 29 Comments

Michael Hayden on the Effects of Snowden's Whistleblowing

Former NSA director Michael Hayden lists three effects of the Snowden documents:

  1. “…the undeniable operational effect of informing adversaries of American intelligence’s tactics, techniques and procedures.”
  2. “…the undeniable economic punishment that will be inflicted on American businesses for simply complying with American law.”
  3. “…the erosion of confidence in the ability of the United States to do anything discreetly or keep anything secret.”

It’s an interesting list, and one that you’d expect from an NSA person. Actually, the whole essay is about what you’d expect from a former NSA person.

My reactions:

  1. This, I agree, is actual damage. From what I can tell, Snowden has done his best to minimize it. And both the Guardian and the Washington Post have refused to publish some of the materials he provided, out of concern for US national security. Hayden believes that both the Chinese and the Russians have Snowden’s entire trove of documents, but I’m less convinced. Still, everyone is acting under the assumption that they do, which is probably a good assumption.
  2. Hayden has it backwards—this is good. I hope that companies that have cooperated with the NSA are penalized in the market. If we are to expect the market to solve any of this, we need the cost of cooperating to be greater than the cost of fighting. If we as consumers punish companies that have complied with the NSA, they’ll be less likely to roll over next time.
  3. In the long run, this might turn out to be a good thing, too. In the Internet age, secrecy is a lot harder to maintain. The countries that figure this out first will be the countries that do well in the coming decades.

And, of course, Hayden lists his “costs” without discussing the benefits. Exposing secret government overreach, a secret agency gone rogue, and a secret court that’s failing in its duties is enormously beneficial. Snowden has blown a whistle that long needed blowing—it’s the only way we can ever hope to fix this. And Hayden completely ignores the very real question of whether these enormous NSA data-collection programs provide any real benefits.

I’m also tired of this argument:

But it takes a special kind of arrogance for this young man to believe that his moral judgment on the dilemma suddenly trumps that of two (incredibly different) presidents, both houses of the U.S. Congress, both political parties, the U.S. court system and more than 30,000 of his co-workers.

It’s like President Obama claiming that the NSA programs are “transparent” because they were cleared by a secret court that only ever sees one side of the argument, or that Congress has provided oversight because a few legislators were allowed to know some of what was going on but forbidden from talking to anyone about it.

Posted on July 24, 2013 at 2:52 PM • 53 Comments

NSA Implements Two-Man Control for Sysadmins

In an effort to lock the barn door after the horse has escaped, the NSA is implementing two-man control for sysadmins:

NSA chief Keith Alexander said his agency had implemented a “two-man rule,” under which any system administrator like Snowden could only access or move key information with another administrator present. With some 15,000 sites to fix, Alexander said, it would take time to spread across the whole agency.

[…]

Alexander said that server rooms where such data is stored are now locked and require a two-man team to access them—safeguards that he said would be implemented at the Pentagon and intelligence agencies after a pilot at the NSA.

This kind of thing has happened before. After USN Chief Warrant Officer John Walker sold encryption keys to the Soviets, the Navy implemented two-man control for key material.

It’s an effective, if expensive, security measure—and an easy one for the NSA to implement while it figures out what it really has to do to secure information from IT insiders.
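One technical way to enforce two-person control over key material is secret splitting: give each custodian a share, such that neither share by itself reveals anything about the key. A minimal sketch in Python, purely illustrative and not a description of how the NSA or the Navy actually does it:

    import os

    def split_key(key):
        # XOR secret splitting: share1 is random, share2 = key XOR share1.
        # Each share on its own is statistically independent of the key.
        share1 = os.urandom(len(key))
        share2 = bytes(a ^ b for a, b in zip(key, share1))
        return share1, share2

    def recover_key(share1, share2):
        # Both custodians must contribute their shares to reconstruct the key.
        return bytes(a ^ b for a, b in zip(share1, share2))

Generalizing to “any two of n administrators” requires a threshold scheme such as Shamir secret sharing, but the principle is the same: no single insider can act alone.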

Posted on July 24, 2013 at 6:18 AM • 41 Comments

How the FISA Court Undermines Trust

This is a succinct explanation of how the secrecy of the FISA court undermines trust.

Surveillance types make a distinction between secrecy of laws, secrecy of procedures and secrecy of operations. The expectation is that the laws that empower or limit the government’s surveillance powers are always public. The programs built atop those laws are often secret. And the individual operations are almost always secret. As long as the public knows about and agreed to the law, the thinking goes, it’s okay for the government to build a secret surveillance architecture atop it.

But the FISA court is, in effect, breaking the first link in that chain. The public no longer knows about the law itself, and most of Congress may not know, either. The courts have remade the law, but they’ve done so secretly, without public comment or review.

Reminds me of the two types of secrecy I wrote about last month.

Posted on July 23, 2013 at 1:00 PM • 16 Comments

Prosecuting Snowden

I generally don’t like stories about Snowden as a person, because they distract from the real story of the NSA surveillance programs, but this article on the costs and benefits of the US government prosecuting Edward Snowden is worth reading.

Additional concerns relate to the trial. Snowden would no doubt obtain high-powered lawyers. Protesters would ring the courthouse. Journalists would camp out inside. As proceedings dragged on for months, the spotlight would remain on the N.S.A.’s spying and the administration’s pursuit of leakers. Instead of fading into obscurity, the Snowden affair would continue to grab headlines, and thus to undermine the White House’s ability to shape political discourse.

A trial could turn out to be much more than a distraction: It could be a focal point for domestic and international outrage. From the executive branch’s institutional perspective, the greatest danger posed by the Snowden case is not to any particular program. It is to the credibility of the secrecy system, and at one remove the ideal of our government as a force for good.

[…]

More broadly, Snowden’s case may clash with certain foreign policy goals. The United States often wants other countries’ dissidents to be able to find refuge abroad; this is a longstanding plank of its human rights agenda. The United States also wants illiberal regimes to tolerate online expression that challenges their authority; this is the core of its developing Internet freedom agenda.

Snowden’s prosecution may limit our soft power to lead and persuade in these areas. Of course, U.S. officials could emphasize that Snowden is different, that he’s not a courageous activist but a reckless criminal. But that is what the repressive governments say about their prisoners, too.

EDITED TO ADD (7/22): Related is this article on whether Snowden can manage to avoid arrest. Here’s the ending:

Speaking of movies, near the end of the hit film “Catch Me If You Can,” there’s a scene that Snowden might do well to watch while he’s killing time in the airport lounge (or wherever he is) pondering his fate. The young forger, Frank Abagnale, who has been staying a step ahead of the feds, finally grows irritated and fatigued. Not because they are particularly skilled in their hunting, nor because they are getting closer, but simply because they won’t give up. In a fit of pique, he blurts into the phone, “Stop chasing me!” On the other end, the dogged, bureaucratic Treasury agent, Carl Hanratty, answers, “I can’t stop. It’s my job.”

Ultimately, this is why many people who have been involved in such matters believe Snowden will be caught. Because no matter how much he may love sticking it to the U.S. government and waving the banner of truth, justice, and freedom of speech, that mission will prove largely unsustainable without serious fundraisers, organizers and dedicated allies working on his behalf for a long time.

They’ll have to make Edward Snowden their living, because those who are chasing him already have. Government agents will be paid every minute of every day for as long as it takes. Seasons may change and years may pass, but the odds say that one morning, he’ll look out of a window, go for a walk or stop for a cup of coffee, and the trap will spring shut. It will be almost like a movie.

Posted on July 22, 2013 at 1:04 PM • 32 Comments

Violence as a Source of Trust in Criminal Societies

This is interesting:

If I know that you have committed a violent act, and you know that I have committed a violent act, we each have information on each other that we might threaten to use if relations go sour (Schelling notes that one of the most valuable rights in business relations is the right to be sued—this is a functional equivalent).

Abstract of original paper; full paper is behind a paywall.

Posted on July 22, 2013 at 6:36 AM • 18 Comments

TSA Considering Implementing Randomized Security

For a change, here’s a good idea by the TSA:

TSA has just issued a Request for Information (RFI) to prospective vendors who could develop and supply such randomizers, which TSA expects to deploy at CAT X through CAT IV airports throughout the United States.

“The Randomizers would be used to route passengers randomly to different checkpoint lines,” says the agency’s RFI.

The article lists a bunch of requirements by the TSA for the device.

I’ve seen something like this at customs in, I think, India. Every passenger walks up to a kiosk and presses a button. If the green light turns on, he walks through. If the red light turns on, his bags get searched. Presumably the customs officials can set the search percentage.
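The selection logic itself is trivial; what matters is that the randomness be unpredictable. A minimal sketch in Python (the 20 percent search rate is a made-up parameter, not anything from the RFI):

    import secrets

    def select_for_search(search_percentage=20):
        # Red light (True) means secondary search; green light (False) means proceed.
        # A cryptographically strong random source keeps the outcome unpredictable.
        return secrets.randbelow(100) < search_percentage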

Automatic randomized screening is a good idea. It’s free from bias or profiling. It can’t be gamed. These both make it more secure. Note that this is just an RFI from the TSA. An actual program might be years away, and it might not be implemented well. But it’s certainly a start.

EDITED TO ADD (7/19): This is an opposing view. Basically, it’s based on the argument that profiling makes sense, and randomized screening means that you can’t profile. It’s a position I’ve argued against before.

EDITED TO ADD (8/10): Another argument that profiling does not work.

Posted on July 19, 2013 at 2:45 PM • 60 Comments

Counterterrorism Mission Creep

One of the assurances I keep hearing about the U.S. government’s spying on American citizens is that it’s only used in cases of terrorism. Terrorism is, of course, an extraordinary crime, and its horrific nature is supposed to justify permitting all sorts of excesses to prevent it. But there’s a problem with this line of reasoning: mission creep. The definitions of “terrorism” and “weapon of mass destruction” are broadening, and these extraordinary powers are being used, and will continue to be used, for crimes other than terrorism.

Back in 2002, the Patriot Act greatly broadened the definition of terrorism to include all sorts of “normal” violent acts as well as non-violent protests. The term “terrorist” is surprisingly broad; since the terrorist attacks of 9/11, it has been applied to people you wouldn’t normally consider terrorists.

The most egregious examples of this are the three anti-nuclear pacifists, including an 82-year-old nun, who cut through a chain-link fence at the Oak Ridge nuclear-weapons-production facility in 2012. While they were originally arrested on a misdemeanor trespassing charge, the government kept increasing their charges as the facility’s security lapses became more embarrassing. Now the protestors have been convicted of violent crimes of terrorism—and remain in jail.

Meanwhile, a Tennessee government official claimed that complaining about water quality could be considered an act of terrorism. To the government’s credit, he was subsequently demoted for those remarks.

The notion of making a terrorist threat is older than the current spate of anti-terrorism craziness. It basically means threatening people in order to terrorize them, and can include things like pointing a fake gun at someone, threatening to set off a bomb, and so on. A Texas high-school student recently spent five months in jail for writing the following on Facebook: “I think I’ma shoot up a kindergarten. And watch the blood of the innocent rain down. And eat the beating heart of one of them.” Last year, two Irish tourists were denied entry at the Los Angeles Airport because of some misunderstood tweets.

Another term that’s expanded in meaning is “weapon of mass destruction.” The law is surprisingly broad, and includes anything that explodes, leading political scientist and terrorism-fear skeptic John Mueller to comment:

As I understand it, not only is a grenade a weapon of mass destruction, but so is a maliciously-designed child’s rocket even if it doesn’t have a warhead. On the other hand, although a missile-propelled firecracker would be considered a weapon of mass destruction if its designers had wanted to think of it as a weapon, it would not be so considered if it had previously been designed for use as a weapon and then redesigned for pyrotechnic use or if it was surplus and had been sold, loaned, or given to you (under certain circumstances) by the secretary of the army ….

All artillery, and virtually every muzzle-loading military long arm for that matter, legally qualifies as a WMD. It does make the bombardment of Ft. Sumter all the more sinister. To say nothing of the revelation that The Star Spangled Banner is in fact an account of a WMD attack on American shores.

After the Boston Marathon bombings, one commentator described our use of the term this way: “What the United States means by terrorist violence is, in large part, ‘public violence some weirdo had the gall to carry out using a weapon other than a gun.’ … Mass murderers who strike with guns (and who don’t happen to be Muslim) are typically read as psychopaths disconnected from the larger political sphere.” Sadly, there’s a lot of truth to that.

Even as the definition of terrorism broadens, we have to ask how far we will extend that arbitrary line. Already, we’re using these surveillance systems in other areas. A raft of secret court rulings has recently expanded the NSA’s eavesdropping powers to include “people possibly involved in nuclear proliferation, espionage and cyberattacks.” A “little-noticed provision” in a 2008 law expanded the definition of “foreign intelligence” to include “weapons of mass destruction,” which, as we’ve just seen, is surprisingly broad.

A recent Atlantic essay asks, somewhat facetiously, “If PRISM is so good, why stop with terrorism?” The author’s point was to discuss the value of the Fourth Amendment, even if it makes the police less efficient. But it’s actually a very good question. Once the NSA’s ubiquitous surveillance of all Americans is complete—once it has the ability to collect and process all of our emails, phone calls, text messages, Facebook posts, location data, physical mail, financial transactions, and who knows what else—why limit its use to cases of terrorism? I can easily imagine a public groundswell of support to use it to help solve some other heinous crime, like a kidnapping. Or maybe a child-pornography case. From there, it’s an easy step to enlist NSA surveillance in the continuing war on drugs; that’s certainly important enough to warrant regular access to the NSA’s databases. Or maybe to identify illegal immigrants. After all, we’ve already invested in this system; we might as well get as much out of it as we possibly can. Then it’s a short jump to the trivial examples suggested in the Atlantic essay: speeding and illegal downloading. This “slippery slope” argument is largely speculative, but we’ve already started down that incline.

Criminal defendants are starting to demand access to the NSA data that they believe will exonerate them. How can a moral government refuse this request?

More humorously, the NSA might have created the best backup system ever.

Technology changes slowly, but political intentions can change very quickly. In 2000, I wrote in my book Secrets and Lies about police surveillance technologies: “Once the technology is in place, there will always be the temptation to use it. And it is poor civic hygiene to install technologies that could someday facilitate a police state.” Today we’re installing technologies of ubiquitous surveillance, and the temptation to use them will be overwhelming.

This essay originally appeared on TheAtlantic.com.

EDITED TO ADD (8/4): Other agencies are already asking to use the NSA data:

Agencies working to curb drug trafficking, cyberattacks, money laundering, counterfeiting and even copyright infringement complain that their attempts to exploit the security agency’s vast resources have often been turned down because their own investigations are not considered a high enough priority, current and former government officials say.

Posted on July 19, 2013 at 9:40 AM • 46 Comments

Snowden's Dead Man's Switch

Edward Snowden has set up a dead man’s switch. He’s distributed encrypted copies of his document trove to various people, and has set up some sort of automatic system to distribute the key, should something happen to him.

Dead man’s switches have a long history, both for safety reasons (the machinery automatically stops if the operator’s hand goes slack) and for security. WikiLeaks did the same thing with the State Department cables.

“It’s not just a matter of, if he dies, things get released, it’s more nuanced than that,” he said. “It’s really just a way to protect himself against extremely rogue behavior on the part of the United States, by which I mean violent actions toward him, designed to end his life, and it’s just a way to ensure that nobody feels incentivized to do that.”

I’m not sure he’s thought this through, though. I would be more worried that someone would kill me in order to get the documents released than I would be that someone would kill me to prevent the documents from being released. Any real-world situation involves multiple adversaries, and it’s important to keep all of them in mind when designing a security system.
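Mechanically, a dead man’s switch reduces to a missed-check-in timer: if the owner stops checking in, the release action fires. A minimal sketch of that logic, with made-up parameters (the article says nothing about how Snowden’s system actually works):

    import time

    CHECKIN_INTERVAL = 7 * 86400  # made-up policy: the owner must check in at least weekly

    def switch_triggered(last_checkin, now=None):
        # If the owner has not checked in within the interval, the release
        # action (e.g., distributing the decryption key) is triggered.
        if now is None:
            now = time.time()
        return now - last_checkin > CHECKIN_INTERVAL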

Posted on July 18, 2013 at 8:37 AM • 64 Comments

DHS Puts its Head in the Sand

On the subject of the recent Washington Post Snowden document, the DHS sent this e-mail out to at least some of its employees:

From: xxxxx
Sent: Thursday, July 11, 2013 10:28 AM
To: xxxxx
Cc: xxx Security Reps; xxx SSO; xxxx;xxxx
Subject: //// SECURITY ADVISORY//// NEW WASHINGTON POST WEBPAGE ARTICLE—DO NOT CLICK ON THIS LINK

I have been advised that this article is on the Washington Post’s Website today and has a clickable link title “The NSA Slide you never seen” that must not be opened. This link opens up a classified document which will raise the classification level of your Unclassified workstation to the classification of the slide which is reported to be TS/NF. This has been verified by our Mission Partner and the reason for this email.

If opened on your home or work computer you are obligated to report this to the SSO as your computer could then be considered a classified workstation.

Again, please exercise good judgment when visiting these webpages and clicking on such links. You are violating your Non-Disclosure Agreement in which you promise by signing that you will protect Classified National Security Information. You may be subject to any administrative or legal action from the Government.

SSOs, please pass this on to your respective components as this may be a threat to the systems under your jurisdiction.

This is not just ridiculous, it’s idiotic. Why put DHS employees at a disadvantage by trying to prevent them from knowing what the rest of the world knows? The point of classification is to keep something out of the hands of the bad guys. Once a document is public, the bad guys have access to it. The harm is already done. Can someone think of a reason for this DHS policy other than spite?

Posted on July 17, 2013 at 2:45 PM • 94 Comments

The Value of Breaking the Law

Interesting essay on the impossibility of being entirely lawful all the time, the balance that results from the difficulty of law enforcement, and the societal value of being able to break the law.

What’s often overlooked, however, is that these legal victories would probably not have been possible without the ability to break the law.

The state of Minnesota, for instance, legalized same-sex marriage this year, but sodomy laws had effectively made homosexuality itself completely illegal in that state until 2001. Likewise, before the recent changes making marijuana legal for personal use in WA and CO, it was obviously not legal for personal use.

Imagine if there were an alternate dystopian reality where law enforcement was 100% effective, such that any potential law offenders knew they would be immediately identified, apprehended, and jailed. If perfect law enforcement had been a reality in MN, CO, and WA since their founding in the 1850s, it seems quite unlikely that these recent changes would have ever come to pass. How could people have decided that marijuana should be legal, if nobody had ever used it? How could states decide that same sex marriage should be permitted, if nobody had ever seen or participated in a same sex relationship?

This is very much like my notion of “outliers” in my book Liars and Outliers.

Posted on July 16, 2013 at 12:35 PM • 32 Comments

A Problem with the US Privacy and Civil Liberties Oversight Board

I haven’t heard much about the Privacy and Civil Liberties Oversight Board. They recently held hearings regarding the Snowden documents.

This particular comment stood out:

Rachel Brand, another seemingly unsympathetic board member, concluded: “There is nothing that is more harmful to civil liberties than terrorism. This discussion here has been quite sterile because we have not been talking about terrorism.”

If terrorism harms civil liberties, it’s because elected officials react in panic and revoke them.

I’m not optimistic about this board.

Posted on July 16, 2013 at 7:11 AM • 30 Comments

My Fellowship at the Berkman Center

I have been awarded a fellowship at the Berkman Center for Internet and Society at Harvard University for the 2013–2014 academic year. I’m excited about this; Berkman and Harvard are where a lot of the cool kids hang out, and I’m looking forward to working with them this coming year.

In particular, I have three goals for the year:

  • I want to have my own project. I’ll be continuing to work on my research—and possible book—on security and power and technology. There are a bunch of people I would like to work with at Harvard: Yochai Benkler, Larry Lessig, Jonathan Zittrain, Joseph Nye, Jr., Steven Pinker, Michael Sandel. And others at MIT: Ethan Zuckerman, David Clark. I know I’ve forgotten names.
  • I want to make a difference on a few other Berkman projects. I don’t know what yet, but I know there will be options.
  • I want to work with some students on their projects. There are always interesting student projects, and I would like to be an informal adviser on a few of them. So if any of you are Harvard or MIT students and have a project you think I would be interested in, please e-mail me.

I’m not moving to Boston for the year, but I’ll be there a lot.

Posted on July 13, 2013 at 6:30 PM • 24 Comments

F2P Monetization Tricks

This is a really interesting article about something I never even thought about before: how games (“F2P” means “free to play”) trick players into paying for stuff.

For example:

This is my favorite coercive monetization technique, because it is just so powerful. The technique involves giving the player some really huge reward, that makes them really happy, and then threatening to take it away if they do not spend. Research has shown that humans like getting rewards, but they hate losing what they already have much more than they value the same item as a reward. To be effective with this technique, you have to tell the player they have earned something, and then later tell them that they did not. The longer you allow the player to have the reward before you take it away, the more powerful is the effect.

This technique is used masterfully in Puzzle and Dragons. In that game the play primarily centers around completing “dungeons.” To the consumer, a dungeon appears to be a skill challenge, and initially it is. Of course once the customer has had enough time to get comfortable with the idea that this is a skill game the difficulty goes way up and it becomes a money game. What is particularly effective here is that the player has to go through several waves of battles in a dungeon, with rewards given after each wave. The last wave is a “boss battle” where the difficulty becomes massive and if the player is in the recommended dungeon for them then they typically fail here. They are then told that all of the rewards from the previous waves are going to be lost, in addition to the stamina used to enter the dungeon (this can be 4 or more real hours of time worth of stamina).

At this point the user must choose to either spend about $1 or lose their rewards, lose their stamina (which they could get back for another $1), and lose their progress. To the brain this is not just a loss of time. If I spend an hour writing a paper and then something happens and my writing gets erased, this is much more painful to me than the loss of an hour. The same type of achievement loss is in effect here. Note that in this model the player could be defeated multiple times in the boss battle and in getting to the boss battle, thus spending several dollars per dungeon.

This technique alone is effective enough to make consumers of any developmental level spend. Just to be safe, PaD uses the same technique at the end of each dungeon again in the form of an inventory cap. The player is given a number of “eggs” as rewards, the contents of which have to be held in inventory. If your small inventory space is exceeded, again those eggs are taken from you unless you spend to increase your inventory space. Brilliant!

It really is a piece about security. These games use all sorts of mental tricks to coerce money from people who would not have spent it otherwise. Tricks include misdirection, sunk costs, withholding information, cognitive dissonance, and prospect theory.

I am reminded of the cognitive tricks scammers use. And, of course, much of the psychology of security.

Posted on July 12, 2013 at 6:37 AM • 33 Comments

Musing on Secret Languages

This is really interesting. It starts by talking about a “cant” dictionary of 16th-century thieves’ argot, and ends up talking about secret languages in general.

Incomprehension breeds fear. A secret language can be a threat: signifier has no need of signified in order to pack a punch. Hearing a conversation in a language we don’t speak, we wonder whether we’re being mocked. The klezmer-loshn spoken by Jewish musicians allowed them to talk about the families and wedding guests without being overheard. Germanía and Grypsera are prison languages designed to keep information from guards—the first in sixteenth-century Spain, the second in today’s Polish jails. The same logic shows how a secret language need not be the tongue of a minority or an oppressed group: given the right circumstances, even a national language can turn cryptolect. In 1680, as Moroccan troops besieged the short-lived British city of Tangier, Irish soldiers manning the walls resorted to speaking as Gaeilge, in Irish, for fear of being understood by English-born renegades in the Sultan’s armies. To this day, the Irish abroad use the same tactic in discussing what should go unheard, whether bargaining tactics or conversations about taxi-drivers’ haircuts. The same logic lay behind North African slave-masters’ insistence that their charges use the Lingua Franca (a pidgin based on Italian and Spanish and used by traders and slaves in the early modern Mediterranean) so that plots of escape or revolt would not go unheard. A Flemish captive, Emanuel d’Aranda, said that on one slave-galley alone, he heard “the Turkish, the Arabian, Lingua Franca, Spanish, French, Dutch, and English.” On his arrival at Algiers, his closest companion was an Icelander. In such a multilingual environment, the Lingua Franca didn’t just serve for giving orders, but as a means of restricting chatter and intrigue between slaves. If the key element of the secret language is that it obscures the understandings of outsiders, a national tongue can serve just as well as an argot.

Posted on July 10, 2013 at 5:55 AM • 25 Comments

The Effectiveness of Privacy Audits

This study concludes that there is a benefit to forcing companies to undergo privacy audits: “The results show that there are empirical regularities consistent with the privacy disclosures in the audited financial statements having some effect. Companies disclosing privacy risks are less likely to incur a breach of privacy related to unintentional disclosure of privacy information; while companies suffering a breach of privacy related to credit cards are more likely to disclose privacy risks afterwards. Disclosure after a breach is negatively related to privacy breaches related to hacking, and disclosure before a breach is positively related to breaches concerning insider trading.”

Posted on July 9, 2013 at 12:17 PM • 6 Comments

Another Perspective on the Value of Privacy

A philosophical perspective:

But while Descartes’s overall view has been rightly rejected, there is something profoundly right about the connection between privacy and the self, something that recent events should cause us to appreciate. What is right about it, in my view, is that to be an autonomous person is to be capable of having privileged access (in the two senses defined above) to information about your psychological profile—your hopes, dreams, beliefs and fears. A capacity for privacy is a necessary condition of autonomous personhood.

To get a sense of what I mean, imagine that I could telepathically read all your conscious and unconscious thoughts and feelings—I could know about them in as much detail as you know about them yourself—and further, that you could not, in any way, control my access. You don’t, in other words, share your thoughts with me; I take them. The power I would have over you would of course be immense. Not only could you not hide from me, I would know instantly a great amount about how the outside world affects you, what scares you, what makes you act in the ways you do. And that means I could not only know what you think, I could to a large extent control what you do.

That is the political worry about the loss of privacy: it threatens a loss of freedom. And the worry, of course, is not merely theoretical. Targeted ad programs, like Google’s, which track your Internet searches for the purpose of sending you ads that reflect your interests can create deeply complex psychological profiles—especially when one conducts searches for emotional or personal advice information: Am I gay? What is terrorism? What is atheism? If the government or some entity should request the identity of the person making these searches for national security purposes, we’d be on the way to having a real-world version of our thought experiment.

But the loss of privacy doesn’t just threaten political freedom. Return for a moment to our thought experiment where I telepathically know all your thoughts whether you like it or not. From my perspective—the perspective of the knower—your existence as a distinct person would begin to shrink. Our relationship would be so lopsided that there might cease to be, at least to me, anything subjective about you. As I learn what reactions you will have to stimuli, why you do what you do, you will become like any other object to be manipulated. You would be, as we say, dehumanized.

Posted on July 9, 2013 at 6:24 AM • 16 Comments

Big Data Surveillance Results in Bad Policy

Evgeny Morozov makes a point about surveillance and big data: it just looks for useful correlations without worrying about causes, and leads people to implement “fixes” based simply on those correlations—rather than understanding and correcting the underlying causes.

As the media academic Mark Andrejevic points out in Infoglut, his new book on the political implications of information overload, there is an immense—but mostly invisible—cost to the embrace of Big Data by the intelligence community (and by just about everyone else in both the public and private sectors). That cost is the devaluation of individual and institutional comprehension, epitomized by our reluctance to investigate the causes of actions and jump straight to dealing with their consequences. But, argues Andrejevic, while Google can afford to be ignorant, public institutions cannot.

“If the imperative of data mining is to continue to gather more data about everything,” he writes, “its promise is to put this data to work, not necessarily to make sense of it. Indeed, the goal of both data mining and predictive analytics is to generate useful patterns that are far beyond the ability of the human mind to detect or even explain.” In other words, we don’t need to inquire why things are the way they are as long as we can affect them to be the way we want them to be. This is rather unfortunate. The abandonment of comprehension as a useful public policy goal would make serious political reforms impossible.

Forget terrorism for a moment. Take more mundane crime. Why does crime happen? Well, you might say that it’s because youths don’t have jobs. Or you might say that’s because the doors of our buildings are not fortified enough. Given some limited funds to spend, you can either create yet another national employment program or you can equip houses with even better cameras, sensors, and locks. What should you do?

If you’re a technocratic manager, the answer is easy: Embrace the cheapest option. But what if you are that rare breed, a responsible politician? Just because some crimes have now become harder doesn’t mean that the previously unemployed youths have finally found employment. Surveillance cameras might reduce crime—even though the evidence here is mixed—but no studies show that they result in greater happiness of everyone involved. The unemployed youths are still as stuck as they were before—only that now, perhaps, they displace anger onto one another. On this reading, fortifying our streets without inquiring into the root causes of crime is a self-defeating strategy, at least in the long run.

Big Data is very much like the surveillance camera in this analogy: Yes, it can help us avoid occasional jolts and disturbances and, perhaps, even stop the bad guys. But it can also blind us to the fact that the problem at hand requires a more radical approach. Big Data buys us time, but it also gives us a false illusion of mastery.

Posted on July 8, 2013 at 11:50 AM • 28 Comments

Protecting E-Mail from Eavesdropping

In the wake of the Snowden NSA documents, reporters have been asking me whether encryption can solve the problem. Leaving aside the fact that much of what the NSA is collecting can’t be encrypted by the user—telephone metadata, e-mail headers, phone calling records, e-mail you’re reading from a phone or tablet or cloud provider, anything you post on Facebook—it’s hard to give good advice.

In theory, an e-mail encryption program will protect you, but the reality is much more complicated.

  • The program has to be vulnerability-free. If there is some back door in the program that bypasses, or weakens, the encryption, it’s not secure. It’s very difficult, almost impossible, to verify that a program is vulnerability-free.
  • The user has to choose a secure password. Luckily, there’s advice on how to do this.
  • The password has to be managed securely. The user can’t store it in a file somewhere. If he’s worried about security even after the FBI has arrested him and searched his house, he shouldn’t write it on a piece of paper, either.
  • Actually, he should understand the threat model he’s operating under. Is it the NSA trying to eavesdrop on everything, or an FBI investigation that specifically targets him—or a targeted attack, like dropping a Trojan on his computer, that bypasses e-mail encryption entirely?

This is simply too much for the poor reporter, who wants an easy-to-transcribe answer.

We’ve known how to send cryptographically secure e-mail since the early 1990s. Twenty years later, we’re still working on the security engineering of e-mail programs. And if the NSA is eavesdropping on encrypted e-mail, and if the FBI is decrypting messages from suspects’ hard drives, they’re both breaking the engineering, not the underlying cryptographic algorithms.
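The cryptographic core has been easy for a long time; it’s everything around it, in the list above, that’s hard. As a rough sketch, assuming the python-gnupg wrapper, a working GnuPG installation, and the recipient’s public key already in the keyring (the address is made up), encrypting a message is a few lines:

    import gnupg

    gpg = gnupg.GPG()  # uses the local GnuPG binary and keyring

    # Encrypt to the recipient's public key (made-up address).
    result = gpg.encrypt("meet at noon", recipients=["alice@example.com"])
    if not result.ok:
        raise RuntimeError(result.status)

    armored = str(result)  # ASCII-armored ciphertext, ready to paste into an e-mail body

None of this addresses the hard parts: verifying the program, protecting the passphrase, or matching the tool to the threat model.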

On the other hand, the two adversaries can be very different. The NSA has to process a ginormous amount of traffic. It’s the “drinking from a fire hose” problem; they cannot afford to devote a lot of time to decrypting everything, because they simply don’t have the computing resources. There’s just too much data to collect. In these situations, even a modest level of encryption is enough—until you are specifically targeted. This is why the NSA saves all encrypted data it encounters; it might want to devote cryptanalysis resources to it at some later time.

Posted on July 8, 2013 at 6:43 AM • 61 Comments

How Apple Continues to Make Security Invisible

Interesting article:

Apple is famously focused on design and human experience as their top guiding principles. When it comes to security, that focus created a conundrum. Security is all about placing obstacles in the way of attackers, but (despite the claims of security vendors) those same obstacles can get in the way of users, too.

[…]

For many years, Apple tended to choose good user experience at the expense of leaving users vulnerable to security risks. That strategy worked for a long time, in part because Apple’s comparatively low market share made its products less attractive targets. But as Apple products began to gain in popularity, many of us in the security business wondered how Apple would adjust its security strategies to its new position in the spotlight.

As it turns out, the company not only handled that change smoothly, it has embraced it. Despite a rocky start, Apple now applies its impressive design sensibilities to security, playing the game its own way and in the process changing our expectations for security and technology.

EDITED TO ADD (7/11): iOS security white paper.

Posted on July 5, 2013 at 1:33 PM • 35 Comments

Sixth Movie-Plot Threat Contest Winner

On April 1, I announced the Sixth Mostly-Annual Movie-Plot Threat Contest:

For this year’s contest, I want a cyberwar movie-plot threat. (For those who don’t know, a movie-plot threat is a scare story that would make a great movie plot, but is much too specific to build security policy around.) Not the Chinese attacking our power grid or shutting off 911 emergency services—people are already scaring our legislators with that sort of stuff. I want something good, something no one has thought of before.

On May 15, I announced the five semi-finalists. Voting continued through the end of the month, and the winner is Russell Thomas:

It’s November 2015 and the United Nations Climate Change Conference (UNCCC) is underway in Amsterdam, Netherlands. Over the past year, ocean level rise has done permanent damage to critical infrastructure in Maldives, killing off tourism and sending the economy into freefall. The Small Island Developing States are demanding immediate relief from the Green Climate Fund, but action has been blocked. Conspiracy theories flourish. For months, the rhetoric between developed and developing countries has escalated to veiled and not-so-veiled threats. One person among the elites of the Small Island Developing States sees an opportunity to force action.

He’s Sayyid Abdullah bin Yahya, an Indonesian engineer and construction magnate with interests in Bahrain, Bangladesh, and Maldives, all directly threatened by recent sea level rise. Bin Yahya’s firm installed industrial control systems on several flood control projects, including in the Maldives, but these projects are all stalled and unfinished for lack of financing. He also has a deep, abiding enmity against Holland and the Dutch people, rooted in the 1947 Rawagede massacre that killed his grandfather and father. Like many Muslims, he declared that he was personally insulted by Queen Beatrix’s gift to the people of Indonesia on the 50th anniversary of the massacre—a Friesian cow. “Very rude. That’s part of the Dutch soul, this rudeness”, he said at the time. Also like many Muslims, he became enraged and radicalized in 2005 when the Dutch newspaper Jyllands-Posten published cartoons of the Prophet.

Of all the EU nations, Holland is most vulnerable to rising sea levels. It has spent billions on extensive barriers and flood controls, including the massive Oosterscheldekering storm surge barrier, designed and built in the 80s to protect against a 10,000-year storm surge. While it was only used 24 times between 1986 and 2010, in the last two years the gates have been closed 46 times.

As the UNCCC conference began in November 2015, the Oosterscheldekering was closed yet again to hold off the surge of an early winter storm. Even against low expectations, the first day’s meetings went very poorly. A radicalized and enraged delegation from the Small Island Developing States (SIDS) presented an ultimatum, leading to denunciations and walkouts. “What can they do—start a war?” asked the Dutch Minister of Infrastructure and the Environment in an unguarded moment. There was talk of canceling the rest of the conference.

Overnight, there are a series of news stories in China, South America, and the United States reporting malfunctions of dams that resulted in flash floods and the deaths of tens or hundreds of people in several cases. Web sites associated with the dams were all defaced with the text of the SIDS ultimatum. In the morning, all over Holland there were reports of malfunctions of control equipment associated with flood monitoring and control systems. The winter storm was peaking that day with an expected surge of 7 meters (22 feet), larger than the Great Flood of 1953. With the Oosterscheldekering working normally, this is no worry. But at 10:43am, the storm gates unexpectedly open.

Microsoft Word claims it’s 501 words, but I’m letting that go.

This is the first professional—a researcher—who has won the contest. Be sure to check out his blogs, and his paper at WEIS this year.

Congratulations, Russell Thomas. Your box of fabulous prizes will be on its way to you soon.

History: The First Movie-Plot Threat Contest rules and winner. The Second Movie-Plot Threat Contest rules, semifinalists, and winner. The Third Movie-Plot Threat Contest rules, semifinalists, and winner. The Fourth Movie-Plot Threat Contest rules and winner. The Fifth Movie-Plot Threat Contest rules, semifinalists, and winner.

Posted on July 5, 2013 at 12:08 PM • 27 Comments

Is Cryptography Engineering or Science?

Responding to a tweet by Thomas Ptacek saying, “If you’re not learning crypto by coding attacks, you might not actually be learning crypto,” Colin Percival published a well-thought-out rebuttal, saying in part:

If we were still in the 1990s, I would agree with Thomas. 1990s cryptography was full of holes, and the best you could hope for was to know how your tools were broken so you could try to work around their deficiencies. This was a time when DES and RC4 were widely used, despite having well-known flaws. This was a time when people avoided using CTR mode to convert block ciphers into stream ciphers, due to concern that a weak block cipher could break if fed input blocks which shared many (zero) bytes in common. This was a time when people cared about the “error propagation” properties of block ciphers—that is, how much of the output would be mangled if a small number of bits in the ciphertext are flipped. This was a time when people routinely advised compressing data before encrypting it, because that “compacted” the entropy in the message, and thus made it “more difficult for an attacker to identify when he found the right key”. It should come as no surprise that SSL, designed during this era, has had a long list of design flaws.

Cryptography in the 2010s is different. Now we start with basic components which are believed to be highly secure—e.g., block ciphers which are believed to be indistinguishable from random permutations—and which have been mathematically proven to be secure against certain types of attacks—e.g., AES is known to be immune to differential cryptanalysis. From those components, we then build higher-order systems using mechanisms which have been proven to not introduce vulnerabilities. For example, if you generate an ordered sequence of packets by encrypting data using an indistinguishable-from-random-permutation block cipher (e.g., AES) in CTR mode using a packet sequence number as the CTR nonce, and then append a weakly-unforgeable MAC (e.g., HMAC-SHA256) of the encrypted data and the packet sequence number, the packets both preserve privacy and do not permit any undetected tampering (including replays and reordering of packets). Life will become even better once Keccak (aka. SHA-3) becomes more widely reviewed and trusted, as its “sponge” construction can be used to construct—with provable security—a very wide range of important cryptographic components.
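For concreteness, here is a minimal sketch of the construction Percival describes: AES-CTR with the packet sequence number as the nonce, followed by an HMAC-SHA256 over the sequence number and the ciphertext (encrypt-then-MAC). It assumes the pyca/cryptography package and separate encryption and MAC keys; key management, sequence-number bookkeeping, and replay-window handling are left out.

    import hmac
    import hashlib
    import struct
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def seal_packet(enc_key, mac_key, seq, plaintext):
        nonce = struct.pack(">QQ", 0, seq)  # 16-byte CTR counter block derived from the sequence number
        enc = Cipher(algorithms.AES(enc_key), modes.CTR(nonce)).encryptor()
        ciphertext = enc.update(plaintext) + enc.finalize()
        # Encrypt-then-MAC: the tag covers the sequence number and the ciphertext.
        tag = hmac.new(mac_key, struct.pack(">Q", seq) + ciphertext, hashlib.sha256).digest()
        return ciphertext + tag

    def open_packet(enc_key, mac_key, seq, packet):
        ciphertext, tag = packet[:-32], packet[-32:]
        expected = hmac.new(mac_key, struct.pack(">Q", seq) + ciphertext, hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expected):
            raise ValueError("MAC check failed: tampering, replay, or reordering")
        nonce = struct.pack(">QQ", 0, seq)
        dec = Cipher(algorithms.AES(enc_key), modes.CTR(nonce)).decryptor()
        return dec.update(ciphertext) + dec.finalize()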

He recommends a more modern approach to cryptography: “studying the theory and designing systems which you can prove are secure.”

I think both statements are true—and not contradictory at all. The apparent disagreement stems from differing definitions of cryptography.

Many years ago, on the Cryptographer’s Panel at an RSA conference, then-chief scientist for RSA Bert Kaliski talked about the rise of something he called the “crypto engineer.” His point was that the practice of cryptography was changing. There was the traditional mathematical cryptography—designing and analyzing algorithms and protocols, and building up cryptographic theory—but there was also a more practice-oriented cryptography: taking existing cryptographic building blocks and creating secure systems out of them. It’s this latter group he called crypto engineers. It’s the group of people I wrote Applied Cryptography, and, most recently, co-wrote Cryptography Engineering, for. Colin knows this, directing his advice to “developers”—Kaliski’s crypto engineers.

Traditional cryptography is a science—applied mathematics—and applied cryptography is engineering. I prefer the term “security engineering,” because it necessarily encompasses a lot more than cryptography—see Ross Anderson’s great book of that name. And mistakes in engineering are where a lot of real-world cryptographic systems break.

Provable security has its limitations. Cryptographer Lars Knudsen once said: “If it’s provably secure, it probably isn’t.” Yes, we have provably secure cryptography, but those proofs take very specific forms against very specific attacks. They reduce the number of security assumptions we have to make about a system, but we still have to make a lot of security assumptions.

And cryptography has its limitations in general, despite the apparent strengths. Cryptography’s great strength is that it gives the defender a natural advantage: adding a single bit to a cryptographic key increases the work to encrypt by only a small amount, but doubles the work required to break the encryption. This is how we design algorithms that—in theory—can’t be broken until the universe collapses back on itself.
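The arithmetic behind that asymmetry: an n-bit key means 2^n possible keys, so each added bit barely changes the defender's cost but doubles the attacker's brute-force search.

    # Each additional key bit doubles the brute-force keyspace.
    for bits in (128, 129, 130):
        print(f"{bits}-bit key: {2 ** bits} possible keys")
    print(2 ** 129 // 2 ** 128)  # -> 2: one extra bit, twice the search work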

Despite this, cryptographic systems are broken all the time: well before the heat death of the universe. They’re broken because of software mistakes in coding the algorithms. They’re broken because the computer’s memory management system left a stray copy of the key lying around, and the operating system automatically copied it to disk. They’re broken because of buffer overflows and other security flaws. They’re broken by side-channel attacks. They’re broken because of bad user interfaces, or insecure user practices.

Lots of people have said: “In theory, theory and practice are the same. But in practice, they are not.” It’s true about cryptography. If you want to be a cryptographer, study mathematics. Study the mathematics of cryptography, and especially cryptanalysis. There’s a lot of art to the science, and you won’t be able to design good algorithms and protocols until you gain experience in breaking existing ones. If you want to be a security engineer, study implementations and coding. Take the tools cryptographers create, and learn how to use them well.

The world needs security engineers even more than it needs cryptographers. We’re great at mathematically secure cryptography, and terrible at using those tools to engineer secure systems.

After writing this, I found a conversation between the two where they both basically agreed with me.

Posted on July 5, 2013 at 7:04 AM24 Comments

The Office of the Director of National Intelligence Defends NSA Surveillance Programs

Here’s a transcript of a panel discussion about NSA surveillance. There’s a lot worth reading here, but I want to quote Bob Litt’s opening remarks. He’s the General Counsel for ODNI, and he has a lot to say about the programs revealed so far in the Snowden documents.

I’m reminded a little bit of a quote that, like many quotes, is attributed to Mark Twain but in fact is not Mark Twain’s, which is that a lie can get halfway around the world before the truth gets its boots on. And unfortunately, there’s been a lot of misinformation that’s come out about these programs. And what I would like to do in the next couple of minutes is actually go through and explain what the programs are and what they aren’t.

I particularly want to emphasize that I hope you come away from this with the understanding that neither of the programs that have been leaked to the press recently are indiscriminate sweeping up of information without regard to privacy or constitutional rights or any kind of controls. In fact, from my boss, the director of national intelligence, on down through the entire intelligence community, we are in fact sensitive to privacy and constitutional rights. After all, we are citizens of the United States. These are our rights too.

So as I said, we’re talking about two types of intelligence collection programs. I want to start discussing them by making the point that in order to target the emails or the phone calls or the communications of a United States citizen or a lawful permanent resident of the United States, wherever that person is located, or of any person within the United States, we need to go to court, and we need to get an individual order based on probable cause, the equivalent of an electronic surveillance warrant.

That does not mean and nobody has ever said that that means we never acquire the contents of an email or telephone call to which a United States person is a party. Whenever you’re doing any collection of information, you’re going to—you can’t avoid some incidental acquisition of information about nontargeted persons. Think of a wiretap in a criminal case. You’re wiretapping somebody, and you intercept conversations that are innocent as well as conversations that are inculpatory. If we seize somebody’s computer, there’s going to be information about innocent people on that. This is just a necessary incident.

What we do is we impose controls on the use of that information. But what we cannot do—and I’m repeating this—is go out and target the communications of Americans for collection without an individual court order.

So the first of the programs that I want to talk about that was leaked to the press is what’s been called Section 215, or business record collection. It’s called Section 215 because that was the section of the Patriot Act that put the current version of that statute into place. And under this statute, we collect telephone metadata, using a court order which is authorized by the Foreign Intelligence Surveillance Act, under a provision which allows a government to obtain business records for intelligence and counterterrorism purposes. Now, by metadata, in this context, I mean data that describes the phone calls, such as the telephone number making the call, the telephone number dialed, the date and time the call was made and the length of the call. These are business records of the telephone companies in question, which is why they can be collected under this provision.

Despite what you may have read about this program, we do not collect the content of any communications under this program. We do not collect the identity of any participant to any communication under this program. And while there seems to have been some confusion about this as recently as today, I want to make perfectly clear we do not collect cellphone location information under this program, either GPS information or cell site tower information. I’m not sure why it’s been so hard to get people to understand that because it’s been said repeatedly.

When the court approves collection under this statute, it issues two orders. One order, which is the one that was leaked, is an order to providers directing them to turn the relevant information over to the government. The other order, which was not leaked, is the order that spells out the limitations on what we can do with the information after it’s been collected, who has access, what purposes they can access it for and how long it can be retained.

Some people have expressed concern, which is quite a valid concern in the abstract, that if you collect large quantities of metadata about telephone calls, you could subject it to sophisticated analysis, and using those kind of analytical tools, you can derive a lot of information about people that would otherwise not be discoverable.

The fact is, we are specifically not allowed to do that kind of analysis of this data, and we don’t do it. The metadata that is acquired and kept under this program can only be queried when there is reasonable suspicion, based on specific, articulable facts, that a particular telephone number is associated with specified foreign terrorist organizations. And the only purpose for which we can make that query is to identify contacts. All that we get under this program, all that we collect, is metadata. So all that we get back from one of these queries is metadata.

Each determination of a reasonable suspicion under this program must be documented and approved, and only a small portion of the data that is collected is ever actually reviewed, because the vast majority of that data is never going to be responsive to one of these terrorism-related queries.

In 2012 fewer than 300 identifiers were approved for searching this data. Nevertheless, we collect all the data because if you want to find a needle in the haystack, you need to have the haystack, especially in the case of a terrorism-related emergency, which is—and remember that this database is only used for terrorism-related purposes.

And if we want to pursue any further investigation as a result of a number that pops up as a result of one of these queries, we have to do, pursuant to other authorities and in particular if we want to conduct electronic surveillance of any number within the United States, as I said before, we have to go to court, we have to get an individual order based on probable cause.

That’s one of the two programs.

The other program is very different. This is a program that’s sometimes referred to as PRISM, which is a misnomer. PRISM is actually the name of a database. The program is collection under Section 702 of the Foreign Intelligence Surveillance Act, which is a public statute that is widely known to everybody. There’s really no secret about this kind of collection.

This permits the government to target a non-U.S. person, somebody who’s not a citizen or a permanent resident alien, located outside of the United States, for foreign intelligence purposes without obtaining a specific warrant for each target, under the programmatic supervision of the FISA Court.

And it’s important here to step back and note that historically and at the time FISA was originally passed in 1978, this particular kind of collection, targeting non-U.S. persons outside of the United States for foreign intelligence purposes, was not intended to be covered by FISA at all. It was totally outside of the supervision of the FISA Court and totally within the prerogative of the executive branch. So in that respect, Section 702 is properly viewed as an expansion of FISA Court authority, rather than a contraction of that authority.

So Section 702, as I—as I said, it’s—is limited to targeting foreigners outside the United States to acquire foreign intelligence information. And there is a specific provision in this statute that prohibits us from making an end run about this, about—on this requirement, because we are expressly prohibited from targeting somebody outside of the United States in order to obtain some information about somebody inside the United States. That is to say, if we know that somebody outside of the United States is communicating with Spike Bowman, and we really want to get Spike Bowman’s communications, we’ve got to get an electronic surveillance order on Spike Bowman. We cannot target the person outside of the United States to collect on Spike.

In order to use Section 702, the government has to obtain approval from the FISA Court for the plan it intends to use to conduct the collection. This plan includes, first of all, identification of the foreign intelligence purposes of the collection; second, the plan and the procedures for ensuring that the individuals targeted for collection are in fact non-U.S. persons who are located outside of the United States. These are referred to as targeting procedures. And in addition, we have to get approval of the government’s procedures for what it will do with information about a U.S. person or someone inside the United States if we get that information through this collection. These procedures, which are called minimization procedures, determine what we can keep and what we can disseminate to other government agencies and impose limitations on that. And in particular, dissemination of information about U.S. persons is expressly prohibited unless that information is necessary to understand foreign intelligence or to assess its importance or is evidence of a crime or indicates a—an imminent threat of death or serious bodily harm.

And again, these procedures, the targeting and minimization procedures, have to be approved by the FISA court as consistent with the statute and consistent with the Fourth Amendment. And that’s what the Section 702 collection is.

The last thing I want to talk about a little bit is the myth that this is sort of unchecked authority, because we have extensive oversight and control over the collection, which involves all three branches of government. First, NSA has extensive technological processes, including segregated databases, limited access and audit trails, and they have extensive internal oversight, including their own compliance officer, who oversees compliance with the rules.

Second, the Department of Justice and my office, the Office of the Director of National Intelligence, are specifically charged with overseeing NSA’s activities to make sure that there are no compliance problems. And we report to the Congress twice a year on the use of these collection authorities and compliance problems. And if we find a problem, we correct it. Inspectors general, independent inspectors general, who, as you all know, also have an independent reporting responsibility to Congress, also are charged with undertaking a review of how these surveillance programs are carried out.

Any time that information is collected in violation of the rules, it’s reported immediately to the FISA court and is also reported to the relevant congressional oversight committees. It doesn’t matter how small the—or technical the violation is. And information that’s collected in violation of the rules has to be purged, with very limited exceptions.

Both the FISA court and the congressional oversight committees, which are Intelligence and Judiciary, take a very active role in overseeing this program and ensuring that we adhere to the requirements of the statutes and the court orders. And let me just stop and say that the suggestion that the FISA court is a rubber stamp is a complete canard, as anybody who’s ever had the privilege of appearing before Judge Bates or Judge Walton can attest.

Now, this is a complex system, and like any complex system, it’s not error free. But as I said before, every time we have found a mistake, we’ve fixed it. And the mistakes are self-reported. We find them ourselves in the exercise of our oversight. No one has ever found that there has ever been—and by no one, I mean the people at NSA, the people at the Department of Justice, the people at the Office of the Director of National Intelligence, the inspectors general, the FISA court and the congressional oversight committees, all of whom have visibility into this—nobody has ever found that there has ever been any intentional effort to violate the law or any intentional misuse of these tools.

As always, the fundamental issue is trust. If you believe Litt, this is all very comforting. If you don’t, it’s more lies and misdirection. Taken at face value, it explains why so many tech executives were able to say they had never heard of PRISM: it’s the internal NSA name for the database, and not the name of the program. I also note that Litt uses the word “collect” to mean what it actually means, and not the way his boss, Director of National Intelligence James Clapper, Jr., used it to deliberately lie to Congress.

Posted on July 4, 2013 at 7:07 AM72 Comments

Privacy Protests

Interesting law journal article: “Privacy Protests: Surveillance Evasion and Fourth Amendment Suspicion,” by Elizabeth E. Joh.

Abstract: The police tend to think that those who evade surveillance are criminals. Yet the evasion may only be a protest against the surveillance itself. Faced with the growing surveillance capacities of the government, some people object. They buy “burners” (prepaid phones) or “freedom phones” from Asia that have had all tracking devices removed, or they hide their smartphones in ad hoc Faraday cages that block their signals. They use Tor to surf the internet. They identify tracking devices with GPS detectors. They avoid credit cards and choose cash, prepaid debit cards, or bitcoins. They burn their garbage. At the extreme end, some “live off the grid” and cut off all contact with the modern world.

These are all examples of what I call privacy protests: actions individuals take to block or to thwart government surveillance for reasons that are unrelated to criminal wrongdoing. Those engaged in privacy protests do so primarily because they object to the presence of perceived or potential government surveillance in their lives. How do we tell the difference between privacy protests and criminal evasions, and why does it matter? Surprisingly scant attention has been given to these questions, in part because Fourth Amendment law makes little distinction between ordinary criminal evasions and privacy protests. This article discusses the importance of these ordinary acts of resistance, their place in constitutional criminal procedure, and their potential social value in the struggle over the meaning of privacy.

Read this while thinking about the lack of any legal notion of civil disobedience in cyberspace.

Posted on July 3, 2013 at 12:30 PM40 Comments

US Department of Defense Censors Snowden Story

The US Department of Defense is blocking sites that are reporting about the Snowden documents. I presume they’re not censoring sites that are smearing him personally. Note that the DoD is only blocking those sites on its own network, not on the Internet at large. The blocking is being done by automatic filters, presumably the same ones used to block porn or other sites it deems inappropriate.

Anyone know if my blog is being censored? I’m kinda curious.

Posted on July 3, 2013 at 6:02 AM63 Comments

Security Analysis of Children

This is a really good paper describing the unique threat model of children in the home, and the sorts of security philosophies that are effective in dealing with them. Stuart Schechter, “The User IS the Enemy, and (S)he Keeps Reaching for that Bright Shiny Power Button!” Definitely worth reading.

Abstract: Children represent a unique challenge to the security and privacy considerations of the home and technology deployed within it. While these challenges posed by children have long been researched, there is a gaping chasm between the traditional approaches technologists apply to problems of security and privacy and the approaches used by those who deal with this adversary on a regular basis. Indeed, addressing adversarial threats from children via traditional approaches to computer and information security would be a recipe for disaster: it is rarely appropriate to remove a child’s access to the home or its essential systems; children require flexibility; children are often threats to themselves; and children may use the home as a theater of conflict with each other. Further, the goals of security and privacy must be adjusted to account for the needs of childhood development. A home with perfect security—one that prevented all inappropriate behavior or at least ensured that it was recorded so that the adversary could be held accountable—could severely stunt children’s moral and personal growth. We discuss the challenges posed by children and childhood on technologies for the home, the philosophical gap between parenting and security technologists, and design approaches that technology designers could borrow when building systems to be deployed within homes containing this special class of user/adversary.

Posted on July 2, 2013 at 12:08 PM21 Comments

NSA E-Mail Eavesdropping

More Snowden documents analyzed by the Guardian (two articles) discuss how the NSA collected e-mails and data on Internet activity of both Americans and foreigners. The program might have ended in 2011, or it might have continued under a different name. This is the program that resulted in that bizarre tale of Bush officials confronting then-Attorney General John Ashcroft in his hospital room; the New York Times story discusses that. What’s interesting is that the NSA collected this data under one legal pretense. When that justification evaporated, they searched around until they found another pretense.

This story is being picked up a bit more than the previous story, but it’s obvious that the press is tiring of this whole thing. Without the Ashcroft human interest bit, it would be just another story of the NSA eavesdropping on Americans—and that’s last week’s news.

Posted on July 2, 2013 at 6:49 AM12 Comments

How the NSA Eavesdrops on Americans

Two weeks ago, the Guardian published two new Snowden documents. These outline how the NSA’s data-collection procedures allow it to collect lots of data on Americans, and how the FISA court fails to provide oversight over these procedures.

The documents are complicated, but I strongly recommend that people read both the Guardian analysis and the EFF analysis—and possibly the USA Today story.

Frustratingly, this has not become a major news story. It isn’t being widely reported in the media, and most people don’t know about it. At this point, the only aspect of the Snowden story that is in the news is the personal story. The press seems to have had its fill of the far more important policy issues.

I don’t know what there is that can be done about this, but it’s how we all lose.

Posted on July 1, 2013 at 12:16 PM31 Comments

SIMON and SPECK: New NSA Encryption Algorithms

The NSA has published some new symmetric algorithms:

Abstract: In this paper we propose two families of block ciphers, SIMON and SPECK, each of which comes in a variety of widths and key sizes. While many lightweight block ciphers exist, most were designed to perform well on a single platform and were not meant to provide high performance across a range of devices. The aim of SIMON and SPECK is to fill the need for secure, flexible, and analyzable lightweight block ciphers. Each offers excellent performance on hardware and software platforms, is flexible enough to admit a variety of implementations on a given platform, and is amenable to analysis using existing techniques. Both perform exceptionally well across the full spectrum of lightweight applications, but SIMON is tuned for optimal performance in hardware, and SPECK for optimal performance in software.
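
For the curious, here is a compact Python sketch of Speck128/128 (64-bit words, 32 rounds, rotation amounts 8 and 3), written from my reading of the specification. It is illustrative only; check it against the test vectors in the paper before trusting it for anything.

```python
# Sketch of Speck128/128 based on my reading of the published specification:
# 64-bit words, 32 rounds, rotations alpha=8 and beta=3. Illustrative only.
MASK = (1 << 64) - 1  # all arithmetic is modulo 2^64
ROUNDS = 32

def ror(x, r):
    return ((x >> r) | (x << (64 - r))) & MASK

def rol(x, r):
    return ((x << r) | (x >> (64 - r))) & MASK

def speck_round(x, y, k):
    # x = ((x >>> 8) + y) XOR k;  y = (y <<< 3) XOR x
    x = ((ror(x, 8) + y) & MASK) ^ k
    y = rol(y, 3) ^ x
    return x, y

def speck_round_inv(x, y, k):
    y = ror(x ^ y, 3)
    x = rol(((x ^ k) - y) & MASK, 8)
    return x, y

def expand_key(l0, k0):
    # The key schedule reuses the round function, with the round index as "key".
    round_keys = [k0]
    l, k = l0, k0
    for i in range(ROUNDS - 1):
        l, k = speck_round(l, k, i)
        round_keys.append(k)
    return round_keys

def encrypt(x, y, round_keys):
    for k in round_keys:
        x, y = speck_round(x, y, k)
    return x, y

def decrypt(x, y, round_keys):
    for k in reversed(round_keys):
        x, y = speck_round_inv(x, y, k)
    return x, y

if __name__ == "__main__":
    # Arbitrary demonstration values, not test vectors from the paper.
    rks = expand_key(l0=0x0f0e0d0c0b0a0908, k0=0x0706050403020100)
    pt = (0x0011223344556677, 0x8899aabbccddeeff)
    ct = encrypt(*pt, rks)
    assert decrypt(*ct, rks) == pt
    print("ciphertext: %016x %016x" % ct)
```

Notice how the key schedule is just the round function applied to the key words, with the round index standing in for the round key; that reuse is part of what keeps the designs so small.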

It’s always fascinating to study NSA-designed ciphers. I was particularly interested in the algorithms’ similarity to Threefish, and how they improved on what we did. I was most impressed with their key schedule. I am always impressed with how the NSA does key schedules. And I enjoyed the discussion of requirements. Missing, of course, is any cryptanalytic analysis.

I don’t know anything about the context of this paper. Why was the work done, and why is it being made public? I’m curious.

Posted on July 1, 2013 at 6:24 AM34 Comments
