Fans attending Major League Baseball games are being greeted in a new way this year: with metal detectors at the ballparks. Touted as a counterterrorism measure, they're nothing of the sort. They're pure security theater: They look good without doing anything to make us safer. We're stuck with them because of a combination of buck passing, CYA thinking, and fear.
As a security measure, the new devices are laughable. The ballpark metal detectors are much more lax than the ones at an airport checkpoint. They aren't very sensitive -- people with phones and keys in their pockets are sailing through -- and there are no X-ray machines. Bags get the same cursory search they've gotten for years. And fans wanting to avoid the detectors can opt for a "light pat-down search" instead.
There's no evidence that this new measure makes anyone safer. A halfway competent ticketholder would have no trouble sneaking a gun into the stadium. For that matter, a bomb exploded at a crowded checkpoint would be no less deadly than one exploded in the stands. These measures will, at best, be effective at stopping the random baseball fan who's carrying a gun or knife into the stadium. That may be a good idea, but unless there's been a recent spate of fan shootings and stabbings at baseball games -- and there hasn't -- this is a whole lot of time and money being spent to combat an imaginary threat.
But imaginary threats are the only ones baseball executives have to stop this season; there's been no specific terrorist threat or actual intelligence to be concerned about. MLB executives forced this change on ballparks based on unspecified discussions with the Department of Homeland Security after the Boston Marathon bombing in 2013. Because, you know, that was also a sporting event.
This system of vague consultations and equally vague threats ensures that no one organization can be seen as responsible for the change. MLB can claim that the league and teams "work closely" with DHS. DHS can claim that it was MLB's initiative. And both can safely relax because if something happens, at least they did something.
It's an attitude I've seen before: "Something must be done. This is something. Therefore, we must do it." Never mind if the something makes any sense or not.
In reality, this is CYA security, and it's pervasive in post-9/11 America. It no longer matters if a security measure makes sense, if it's cost-effective or if it mitigates any actual threats. All that matters is that you took the threat seriously, so if something happens you won't be blamed for inaction. It's security, all right -- security for the careers of those in charge.
I'm not saying that these officials care only about their jobs and not at all about preventing terrorism, only that their priorities are skewed. They imagine vague threats, and come up with correspondingly vague security measures intended to address them. They experience none of the costs. They're not the ones who have to deal with the long lines and confusion at the gates. They're not the ones who have to arrive early to avoid the messes the new policies have caused around the league. And if fans spend more money at the concession stands because they've arrived an hour early and have had the food and drinks they tried to bring along confiscated, so much the better, from the team owners' point of view.
I can hear the objections to this as I write. You don't know these measures won't be effective! What if something happens? Don't we have to do everything possible to protect ourselves against terrorism?
That's worst-case thinking, and it's dangerous. It leads to bad decisions, bad design and bad security. A better approach is to realistically assess the threats, judge security measures on their effectiveness and take their costs into account. And the result of that calm, rational look will be the realization that there will always be places where we pack ourselves densely together, and that we should spend less time trying to secure those places and more time finding terrorist plots before they can be carried out.
So far, fans have been exasperated but mostly accepting of these new security measures. And this is precisely the problem -- most of us don't care all that much. Our options are to put up with these measures, or stay home. Going to a baseball game is not a political act, and metal detectors aren't worth a boycott. But there's an undercurrent of fear as well. If it's in the name of security, we'll accept it. As long as our leaders are scared of the terrorists, they're going to continue the security theater. And we're similarly going to accept whatever measures are forced upon us in the name of security. We're going to accept the National Security Agency's surveillance of every American, airport security procedures that make no sense and metal detectors at baseball and football stadiums. We're going to continue to waste money overreacting to irrational fears.
We no longer need the terrorists. We're now so good at terrorizing ourselves.
This essay previously appeared in the Washington Post.
Paul Krugman argues that we'll give up our privacy because we want to emulate the rich, who are surrounded by servants who know everything about them:
Consider the Varian rule, which says that you can forecast the future by looking at what the rich have today -- that is, that what affluent people will want in the future is, in general, something like what only the truly rich can afford right now. Well, one thing that's very clear if you spend any time around the rich -- and one of the very few things that I, who by and large never worry about money, sometimes envy -- is that rich people don't wait in line. They have minions who ensure that there's a car waiting at the curb, that the maître d' escorts them straight to their table, that there's a staff member to hand them their keys and their bags are already in the room.
And it's fairly obvious how smart wristbands could replicate some of that for the merely affluent. Your reservation app provides the restaurant with the data it needs to recognize your wristband, and maybe causes your table to flash up on your watch, so you don't mill around at the entrance, you just walk in and sit down (which already happens in Disney World). You walk straight into the concert or movie you've bought tickets for, no need even to have your phone scanned. And I'm sure there's much more -- all kinds of context-specific services that you won't even have to ask for, because systems that track you know what you're up to and what you're about to need.
Daniel C. Dennett and Deb Roy look at our loss of privacy in evolutionary terms, and see all sorts of adaptations coming:
The tremendous change in our world triggered by this media inundation can be summed up in a word: transparency. We can now see further, faster, and more cheaply and easily than ever before -- and we can be seen. And you and I can see that everyone can see what we see, in a recursive hall of mirrors of mutual knowledge that both enables and hobbles. The age-old game of hide-and-seek that has shaped all life on the planet has suddenly shifted its playing field, its equipment and its rules. The players who cannot adjust will not last long.
The impact on our organizations and institutions will be profound. Governments, armies, churches, universities, banks and companies all evolved to thrive in a relatively murky epistemological environment, in which most knowledge was local, secrets were easily kept, and individuals were, if not blind, myopic. When these organizations suddenly find themselves exposed to daylight, they quickly discover that they can no longer rely on old methods; they must respond to the new transparency or go extinct. Just as a living cell needs an effective membrane to protect its internal machinery from the vicissitudes of the outside world, so human organizations need a protective interface between their internal affairs and the public world, and the old interfaces are losing their effectiveness.
Citizen Lab has issued a report on China's "Great Cannon" attack tool, used in the recent DDoS attack against GitHub.
We show that, while the attack infrastructure is co-located with the Great Firewall, the attack was carried out by a separate offensive system, with different capabilities and design, that we term the "Great Cannon." The Great Cannon is not simply an extension of the Great Firewall, but a distinct attack tool that hijacks traffic to (or presumably from) individual IP addresses, and can arbitrarily replace unencrypted content as a man-in-the-middle.
The operational deployment of the Great Cannon represents a significant escalation in state-level information control: the normalization of widespread use of an attack tool to enforce censorship by weaponizing users. Specifically, the Cannon manipulates the traffic of "bystander" systems outside China, silently programming their browsers to create a massive DDoS attack. While employed for a highly visible attack in this case, the Great Cannon clearly has the capability for use in a manner similar to the NSA's QUANTUM system, affording China the opportunity to deliver exploits targeting any foreign computer that communicates with any China-based website not fully utilizing HTTPS.
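To make the "weaponizing users" mechanism concrete: public analyses of the GitHub attack found that the Cannon intercepted requests for an unencrypted analytics script and replaced it with JavaScript that repeatedly fetched pages from the target, turning each bystander's browser into a DDoS node. The sketch below is purely illustrative -- `buildFloodUrls` is a hypothetical helper of my own, not code from the report or the actual payload -- and shows only the general shape of such an injected script, assuming a cache-busting query string to defeat caching:

```javascript
// Hypothetical sketch of the style of script the Cannon injected into
// unencrypted pages; buildFloodUrls is an illustrative helper, not the
// real payload. Each visitor's browser would request these URLs in a
// loop, so millions of ordinary page views become a flood of traffic.
function buildFloodUrls(target, paths, rounds) {
  const urls = [];
  for (let i = 0; i < rounds; i++) {
    for (const p of paths) {
      // Cache-busting query string forces a fresh request every time.
      urls.push(`${target}${p}?r=${i}`);
    }
  }
  return urls;
}

// In a victim's browser, the injected code would then do something like:
//   setInterval(() => urls.forEach(u => fetch(u)), 2000);
console.log(buildFloodUrls("https://example.org", ["/a", "/b"], 2));
```

The defense the report points to is exactly what this sketch implies: serve everything over HTTPS, because an on-path attacker can only rewrite content it can read.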
It's kind of hard for the US to complain about this kind of thing, since we do it too.
As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.
John Mueller suggests an alternative to the FBI's practice of encouraging terrorists and then arresting them for something they would never have planned on their own:
The experience with another case can be taken to suggest that there could be an alternative, and far less costly, approach to dealing with would-be terrorists, one that might generally (but not always) be effective at stopping them without actually having to jail them.
It involves a hothead in Virginia who ranted about jihad on Facebook, bragging about how "we dropped the twin towers." He then told a correspondent in New Orleans that he was going to bomb the Washington, D.C. Metro the next day. Not wanting to take any chances and not having the time to insinuate an informant, the FBI arrested him. Not surprisingly, they found no bomb materials in his possession. Since irresponsible bloviating is not illegal (if it were, Washington would quickly become severely underpopulated), the police could only charge him with a minor crime -- making an interstate threat. He received only a good scare, a penalty of time served and two years of supervised release.
That approach seems to have worked: the guy seems never to have been heard from again. It resembles the Secret Service's response when they get a tip that someone has ranted about killing the president. They do not insinuate an encouraging informant into the ranter's company to eventually offer crucial, if bogus, facilitating assistance to the assassination plot. Instead, they pay the person a Meaningful Visit and find that this works rather well as a dissuasion device. Also, in the event of a presidential trip to the ranter's vicinity, the ranter is visited again. It seems entirely possible that this approach could productively be applied more widely in terrorism cases. Ranting about killing the president may be about as predictive of violent action as ranting about the virtues of terrorism to deal with a political grievance. The terrorism cases are populated by many such ranters -- indeed, tips about their railing have frequently led to FBI involvement. It seems likely, as apparently happened in the Metro case, that the ranter could often be productively deflected by an open visit from the police indicating that they are on to him. By contrast, sending in a paid operative to worm his way into the ranter's confidence may have the opposite result, encouraging, even gulling, him toward violence.