Entries Tagged "cost-benefit analysis"
John Mueller and Mark Stewart ask the important questions about the NSA surveillance programs: why were they secret, what have they accomplished, and what do they cost?
This study claims “terrorism has cost Pakistan around 33.02% of its real national income” between 1973 and 2008, or about 1% per year.
The St. Louis Fed puts the real gross national income of the U.S. at about $13 trillion total, hand-waving an average over the past few years. The best estimate I’ve seen for the increased cost of homeland security in the U.S. in the ten years since 9/11 is $100 billion per year. So that puts the cost of terrorism in the U.S. at about 0.8% per year—surprisingly close to the Pakistani number.
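For anyone who wants to check the arithmetic, here’s a quick back-of-the-envelope sketch in Python, using only the figures cited above:

```python
# Back-of-the-envelope check of the numbers cited above.
us_real_gni = 13e12             # ~$13 trillion real gross national income
security_cost_per_year = 100e9  # ~$100 billion/year in added homeland security

print(f"U.S.: {security_cost_per_year / us_real_gni:.1%} of national income per year")
# -> U.S.: 0.8% of national income per year

pakistan_total_pct = 33.02      # percent of real national income, 1973-2008
years = 2008 - 1973             # 35 years
print(f"Pakistan: {pakistan_total_pct / years:.1f}% per year")
# -> Pakistan: 0.9% per year
```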
The interesting thing is that the expenditures are completely different. In Pakistan, the cost is primarily “a fall in domestic investment and lost workers’ remittances from abroad.” In the U.S., it’s security measures, including the invasion of Iraq.
I remember reading somewhere that about a third of all food spoils. In poor countries, that spoilage primarily happens during production and transport. In rich countries, that spoilage primarily happens after the consumer buys the food. Same rate of loss, completely different causes. This reminds me of that.
The FBI wants a new law that will make it easier to wiretap the Internet. Although its claim is that the new law will only maintain the status quo, it’s really much worse than that. This law will result in less-secure Internet products and create a foreign industry in more-secure alternatives. It will impose costly burdens on affected companies. It will assist totalitarian governments in spying on their own citizens. And it won’t do much to hinder actual criminals and terrorists.
As the FBI sees it, the problem is that people are moving away from traditional communication systems like telephones onto computer systems like Skype. Eavesdropping on telephones used to be easy. The FBI would call the phone company, which would bring agents into a switching room and allow them to literally tap the wires with a pair of alligator clips and a tape recorder. In the 1990s, the government forced phone companies to provide an analogous capability on digital switches; but today, more and more communications happen over the Internet.
What the FBI wants is the ability to eavesdrop on everything. Depending on the system, this ranges from easy to impossible. E-mail systems like Gmail are easy. The mail resides in Google’s servers, and the company has an office full of people who respond to requests for lawful access to individual accounts from governments all over the world. Encrypted voice systems like Silent Circle are impossible to eavesdrop on—the calls are encrypted from one computer to the other, and there’s no central node to eavesdrop from. In those cases, the only way to make the system eavesdroppable is to add a backdoor to the user software. This is precisely the FBI’s proposal. Companies that refuse to comply would be fined $25,000 a day.
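To see why an end-to-end design leaves no central node to tap, here’s a minimal sketch using the PyNaCl library. It’s an illustration of the architecture, not Silent Circle’s actual protocol:

```python
# Minimal end-to-end encryption sketch using PyNaCl (pip install pynacl).
# Illustrates the architecture only; this is not Silent Circle's protocol.
from nacl.public import PrivateKey, Box

# Each endpoint generates its own keypair; private keys never leave the device.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts directly to Bob's public key.
sending_box = Box(alice_key, bob_key.public_key)
ciphertext = sending_box.encrypt(b"meet at noon")

# A relay server in the middle sees only ciphertext. Without one of the
# endpoint private keys, there is nothing for it to hand over to a wiretap.

# Bob decrypts with his private key and Alice's public key.
receiving_box = Box(bob_key, alice_key.public_key)
assert receiving_box.decrypt(ciphertext) == b"meet at noon"
```

The only way to give anyone access to a system like this is to change the software at one of the endpoints—which is exactly the backdoor mandate described above.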
The FBI believes it can have it both ways: that it can open systems to its eavesdropping, but keep them secure from anyone else’s eavesdropping. That’s just not possible. It’s impossible to build a communications system that allows the FBI surreptitious access but doesn’t allow similar access by others. When it comes to security, we have two options: We can build our systems to be as secure as possible from eavesdropping, or we can deliberately weaken their security. We have to choose one or the other.
This is an old debate, and one we’ve been through many times. The NSA even has a name for it: the equities issue. In the 1980s, the equities debate was about export control of cryptography. The government deliberately weakened U.S. cryptography products because it didn’t want foreign groups to have access to secure systems. Two things resulted: fewer Internet products with cryptography, to the insecurity of everybody, and a vibrant foreign security industry based on the unofficial slogan “Don’t buy the U.S. stuff—it’s lousy.”
In 1993, the debate was about the Clipper Chip. This was another deliberately weakened security product, an encrypted telephone. The FBI convinced AT&T to add a backdoor that allowed for surreptitious wiretapping. The product was a complete failure. Again, why would anyone buy a deliberately weakened security system?
In 1994, the Communications Assistance for Law Enforcement Act mandated that U.S. companies build eavesdropping capabilities into phone switches. These were sold internationally; some countries liked having the ability to spy on their citizens. Of course, so did criminals, and there were public scandals in Greece (2005) and Italy (2006) as a result.
In 2012, we learned that every phone switch sold to the Department of Defense had security vulnerabilities in its surveillance system. And just this May, we learned that Chinese hackers breached Google’s system for providing surveillance data for the FBI.
The new FBI proposal will fail in all these ways and more. The bad guys will be able to get around the eavesdropping capability, either by building their own security systems—not very difficult—or buying the more-secure foreign products that will inevitably be made available. Most of the good guys, who don’t understand the risks or the technology, will not know enough to bother and will be less secure. The eavesdropping functions will 1) result in more obscure—and less secure—product designs, and 2) be vulnerable to exploitation by criminals, spies, and everyone else. U.S. companies will be forced to compete at a disadvantage; smart customers won’t buy the substandard stuff when there are more-secure foreign alternatives. Even worse, there are lots of foreign governments who want to use these sorts of systems to spy on their own citizens. Do we really want to be exporting surveillance technology to the likes of China, Syria, and Saudi Arabia?
The FBI’s short-sighted agenda also works against the parts of the government that are still working to secure the Internet for everyone. Initiatives within the NSA, the DOD, and DHS to do everything from securing computer operating systems to enabling anonymous web browsing will all be harmed by this.
What to do, then? The FBI claims that the Internet is “going dark,” and that it’s simply trying to maintain the status quo of being able to eavesdrop. This characterization is disingenuous at best. We are entering a golden age of surveillance; there are more electronic communications available for eavesdropping than ever before, including whole new classes of information: location tracking, financial tracking, and vast databases of historical communications such as e-mails and text messages. The FBI’s surveillance department has it better than ever. With regard to voice communications, yes, software phone calls will be harder to eavesdrop upon. (Although there are questions about Skype’s security.) That’s just part of the evolution of technology, and one that on balance is a positive thing.
Think of it this way: We don’t hand the government copies of our house keys and safe combinations. If agents want access, they get a warrant and then pick the locks or bust open the doors, just as a criminal would do. A similar system would work on computers. The FBI, with its increasingly non-transparent procedures and systems, has failed to make the case that this isn’t good enough.
Finally there’s a general principle at work that’s worth explicitly stating. All tools can be used by the good guys and the bad guys. Cars have enormous societal value, even though bank robbers can use them as getaway cars. Cash is no different. Both good guys and bad guys send e-mails, use Skype, and eat at all-night restaurants. But because society consists overwhelmingly of good guys, the good uses of these dual-use technologies greatly outweigh the bad uses. Strong Internet security makes us all safer, even though it helps the bad guys as well. And it makes no sense to harm all of us in an attempt to harm a small subset of us.
This essay originally appeared in Foreign Policy.
Terrorism causes fear, and we overreact to that fear. Our brains aren’t very good at probability and risk analysis. We tend to exaggerate spectacular, strange and rare events, and downplay ordinary, familiar and common ones. We think rare risks are more common than they are, and we fear them more than probability indicates we should.
Our leaders are just as prone to this overreaction as we are. But aside from basic psychology, there are other reasons that it’s smart politics to exaggerate terrorist threats, and security threats in general.
The first is that we respond to a strong leader. Bill Clinton famously said: “When people feel uncertain, they’d rather have somebody that’s strong and wrong than somebody who’s weak and right.” He’s right.
The second is that doing something—anything—is good politics. A politician wants to be seen as taking charge, demanding answers, fixing things. It just doesn’t look as good to sit back and claim that there’s nothing to do. The logic is along the lines of: “Something must be done. This is something. Therefore, we must do it.”
The third is that the “fear preacher” wins, regardless of the outcome. Imagine two politicians today. One of them preaches fear and draconian security measures. The other is someone like me, who tells people that terrorism is a negligible risk, that risk is part of life, and that while some security is necessary, we should mostly just refuse to be terrorized and get on with our lives.
Fast-forward 10 years. If I’m right and there have been no more terrorist attacks, the fear preacher takes credit for keeping us safe. But if a terrorist attack has occurred, my government career is over. Even if the incidence of terrorism remains as ridiculously low as it is today, there’s no benefit for a politician to take my side of that gamble.
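The asymmetry of that gamble is easy to make concrete. Here’s a toy expected-payoff calculation—the probability and payoffs are invented for illustration, but the conclusion holds for any plausible numbers:

```python
# Toy model of the politician's gamble. All numbers are invented
# for illustration; only the asymmetry matters.
p_attack = 0.01  # even a tiny chance of an attack over the decade

# Payoffs as (no attack, attack):
fear_preacher = (1, 1)     # takes credit if quiet, "told you so" if not
refusenik     = (1, -100)  # no extra credit if right, career over if wrong

def expected_payoff(payoffs):
    no_attack, attack = payoffs
    return (1 - p_attack) * no_attack + p_attack * attack

print(f"{expected_payoff(fear_preacher):+.2f}")  # +1.00
print(f"{expected_payoff(refusenik):+.2f}")      # -0.01: negative even at 1% risk
```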
The fourth and final reason is money. Every new security technology, from surveillance cameras to high-tech fusion centers to airport full-body scanners, has a for-profit corporation lobbying for its purchase and use. Given the three other reasons above, it’s easy—and probably profitable—for a politician to make them happy and say yes.
For any given politician, the implications of these four reasons are straightforward. Overestimating the threat is better than underestimating it. Doing something about the threat is better than doing nothing. Doing something that is explicitly reactive is better than being proactive. (If you’re proactive and you’re wrong, you’ve wasted money. If you’re proactive and you’re right but no longer in power, whoever is in power is going to get the credit for what you did.) Visible is better than invisible. Creating something new is better than fixing something old.
Those last two maxims are why it’s better for a politician to fund a terrorist fusion center than to pay for more Arabic translators for the National Security Agency. No one’s going to see the additional appropriation in the NSA’s secret budget. On the other hand, a high-tech computerized fusion center is going to make front page news, even if it doesn’t actually do anything useful.
This leads to another phenomenon about security and government. Once a security system is in place, it can be very hard to dislodge it. Imagine a politician who objects to some aspect of airport security: the liquid ban, the shoe removal, something. If he pushes to relax security, he gets the blame if something bad happens as a result. No one wants to roll back a police power and have the lack of that power cause a well-publicized death, even if it’s a one-in-a-billion fluke.
We’re seeing this force at work in the bloated terrorist no-fly and watch lists; agents have lots of incentive to put someone on the list, but absolutely no incentive to take anyone off. We’re also seeing this in the Transportation Security Administration’s attempt to reverse the ban on small blades on airplanes. Twice it tried to make the change, and twice fearful politicians prevented it from doing so.
Lots of unneeded and ineffective security measures are perpetuated by a government bureaucracy that is primarily concerned with the security of its members’ careers. They know that voters are more likely to punish them if they fail to secure against a repetition of the last attack, and less likely to punish them if they fail to anticipate the next one.
What can we do? Well, the first step toward solving a problem is recognizing that you have one. These are not iron-clad rules; they’re tendencies. If we can keep these tendencies and their causes in mind, we’re more likely to end up with sensible security measures that are commensurate with the threat, instead of a lot of security theater and draconian police powers that are not.
Our leaders’ job is to resist these tendencies. Our job is to support politicians who do resist.
This essay originally appeared on CNN.com.
EDITED TO ADD (6/4): This essay has been translated into Swedish.
EDITED TO ADD (6/14): A similar essay, on the politics of terrorism defense.
This post by Aleatha Parker-Wood is very applicable to the things I wrote in Liars & Outliers:
A lot of fundamental social problems can be modeled as a disconnection between people who believe (correctly or incorrectly) that they are playing a non-iterated game (in the game theory sense of the word), and people who believe (correctly or incorrectly) that they are playing an iterated game.
For instance, mechanisms such as reputation mechanisms, ostracism, shaming, etc., are all predicated on the idea that the person you’re shaming will reappear and have further interactions with the group. Legal punishment is only useful if you can catch the person, and if the cost of the punishment is more than the benefit of the crime.
If it is possible to act as if the game you are playing is a one-shot game (for instance, you have a very large population to hide in, you don’t need to ever interact with people again, or you can be anonymous), your optimal strategies are going to be different than if you will have to play the game many times, and live with the legal or social consequences of your actions. If you can make enough money as CEO to retire immediately, you may choose to do so, even if you’re so terrible at running the company that no one will ever hire you again.
Social cohesion can be thought of as a manifestation of how “iterated” people feel their interactions are, how likely they are to interact with the same people again and again and have to deal with long term consequences of locally optimal choices, or whether they feel they can “opt out” of consequences of interacting with some set of people in a poor way.
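To make the one-shot/iterated distinction concrete, here’s a toy prisoner’s-dilemma calculation—standard textbook payoffs, my illustration rather than anything from Parker-Wood’s post:

```python
# Toy prisoner's dilemma: textbook payoffs, for illustration only.
# cooperate/cooperate -> 3, defect on a cooperator -> 5, defect/defect -> 1.
REWARD, TEMPTATION, PUNISHMENT = 3, 5, 1

def one_shot(defect: bool) -> int:
    """Payoff against a cooperator you will never meet again."""
    return TEMPTATION if defect else REWARD

def iterated(defect: bool, rounds: int = 20) -> int:
    """Against a grim-trigger opponent: defect once and they defect forever."""
    if not defect:
        return REWARD * rounds
    # One big defection payoff, then mutual defection for the rest.
    return TEMPTATION + PUNISHMENT * (rounds - 1)

print(one_shot(True), one_shot(False))  # 5 3   -> defection wins the one-shot game
print(iterated(True), iterated(False))  # 24 60 -> cooperation wins the iterated game
```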
That’s a pretty cool electronic lock, and I can imagine all sorts of reasons to get one. But I’m sure there are all sorts of unforeseen security vulnerabilities in this system. And even worse, a single vulnerability can affect all the locks. Remember that vulnerability found last year in hotel electronic locks?
Anyone care to guess how long before some researcher finds a way to hack this one? And how well the maker anticipated the need to update the firmware to fix the vulnerability once someone finds it?
I’m not saying that you shouldn’t use this lock, only that you should understand that new technology brings new security risks, and that electronic technology brings new kinds of security risks. Security is a trade-off, and the trade-off is particularly stark in this case.
As part of the fallout of the Boston bombings, we’re probably going to get some new laws that give the FBI additional investigative powers. As with the Patriot Act after 9/11, the debate over whether these new laws are helpful will be minimal, but the effects on civil liberties could be large. Even though most people are skeptical about sacrificing personal freedoms for security, it’s hard for politicians to say no to the FBI right now, and it’s politically expedient to demand that something be done.
If our leaders can’t say no—and there’s no reason to believe they can—there are two concepts that need to be part of any new counterterrorism laws, and investigative laws in general: transparency and accountability.
Long ago, we realized that simply trusting people and government agencies to always do the right thing doesn’t work, so we need to check up on them. In a democracy, transparency and accountability are how we do that. It’s how we ensure that we get both effective and cost-effective government. It’s how we prevent those we trust from abusing that trust, and protect ourselves when they do. And it’s especially important where security is concerned.
First, we need to ensure that the stuff we’re paying money for actually works and has a measurable impact. Law-enforcement organizations regularly invest in technologies that don’t make us any safer. The TSA, for example, could devote an entire museum to expensive but ineffective systems: puffer machines, body scanners, FAST behavioral screening, and so on. Local police departments have been wasting lots of post-9/11 money on unnecessary high-tech weaponry and equipment. The occasional high-profile success aside, police surveillance cameras have been shown to be a largely ineffective police tool.
Sometimes honest mistakes lead organizations to invest in these technologies. Sometimes there’s self-deception and mismanagement—and far too often lobbyists are involved. Given the enormous amount of post-9/11 security money, you inevitably end up with an enormous amount of waste. Transparency and accountability are how we keep all of this in check.
Second, we need to ensure that law enforcement does what we expect it to do and nothing more. Police powers are invariably abused. Mission creep is inevitable, and it results in laws designed to combat one particular type of crime being used for an ever-widening array of crimes. Transparency is the only way we have of knowing when this is going on.
For example, that’s how we learned that the FBI is abusing National Security Letters. Traditionally, we use the warrant process to protect ourselves from police overreach. It’s not enough for the police to want to conduct a search; they also need to convince a neutral third party—a judge—that the search is in the public interest and will respect the rights of those searched. That’s accountability, and it’s the very mechanism that NSLs were exempted from.
When laws are broken, accountability is how we punish those who abused their power. It’s how, for example, we correct racial profiling by police departments. And it’s a lack of accountability that permits the FBI to get away with massive data collection until exposed by a whistleblower or noticed by a judge.
Third, transparency and accountability keep both law enforcement and politicians from lying to us. The Bush Administration lied about the extent of the NSA’s warrantless wiretapping program. The TSA lied about the ability of full-body scanners to save naked images of people. We’ve been lied to about the lethality of tasers, when and how the FBI eavesdrops on cell-phone calls, and about the existence of surveillance records. Without transparency, we would never know.
In the early 1990s, the FBI heavily lobbied Congress for a law to give it new wiretapping powers: the law that became known as CALEA. One of its key justifications was that existing law didn’t allow it to perform speedy wiretaps during kidnapping investigations. It sounded plausible—and who wouldn’t feel sympathy for kidnapping victims?—but when civil-liberties organizations analyzed the actual data, they found that it was just a story; there were no instances of wiretapping in kidnapping investigations. Without transparency, we would never have known that the FBI was making up stories to scare Congress.
If we’re going to give the government any new powers, we need to ensure that there’s oversight. Sometimes this oversight is before action occurs. Warrants are a great example. Sometimes it’s after action occurs: public reporting, audits by inspectors general, open hearings, notice to those affected, or some other mechanism. Too often, law enforcement tries to exempt itself from this principle by supporting laws that are specifically excused from oversight…or by establishing secret courts that just rubber-stamp government wiretapping requests.
Furthermore, we need to ensure that mechanisms for accountability have teeth and are used.
As we respond to the threat of terrorism, we must remember that there are other threats as well. A society without transparency and accountability is the very definition of a police state. And while a police state might have a low crime rate—especially if you don’t define police corruption and other abuses of power as crime—and an even lower terrorism rate, it’s not a society that most of us would willingly choose to live in.
We already give law enforcement enormous power to intrude into our lives. We do this because we know they need this power to catch criminals, and we’re all safer thereby. But because we recognize that a powerful police force is itself a danger to society, we must temper this power with transparency and accountability.
This essay previously appeared on TheAtlantic.com.
Maybe the tide is turning:
America is in a hole. The last response of the blowhards and cowards who have put it there is always: “So what would you do: set them free?” Our answer remains, yes. There is clearly a risk that some of them would then commit some act of violence—in Yemen, elsewhere in the Middle East or even in America itself. That risk can be lessened by surveillance. But even if another outrage were to happen, the evil of “Gitmo” has recruited far more people to terrorism than a mere 166. Mr Obama should think about America’s founding principles, take out his pen and end this stain on its history.
I agree 100%.
This isn’t the first time people have pointed out that our politics are creating more terrorists than they’re killing—especially our drone strikes—but I don’t expect this sort of security trade-off analysis from the Economist.
In May, neuroscientist and popular author Sam Harris and I debated the issue of profiling Muslims at airport security. We each wrote essays, then went back and forth on the issue. I don’t recommend reading the entire discussion; we spent 14,000 words talking past each other. But what’s interesting is how our debate illustrates the differences between a security engineer and an intelligent layman. Harris was uninterested in the detailed analysis required to understand a security system and unwilling to accept that security engineering is a specialized discipline with a body of knowledge and relevant expertise. He trusted his intuition.
Many people have researched how intuition fails us in security: Paul Slovic and Bill Burns on risk perception, Daniel Kahneman on cognitive biases in general, Rick Wash on folk computer-security models. I’ve written about the psychology of security, and Daniel Gardner has written more. Basically, our intuitions are based on things like antiquated fight-or-flight models, and these increasingly fail in our technological world.
This problem isn’t unique to computer security, or even security in general. But this misperception about security matters now more than it ever has. We’re no longer asking people to make security choices only for themselves and their businesses; we need them to make security choices as a matter of public policy. And getting it wrong has increasingly bad consequences.
Computers and the Internet have collided with public policy. The entertainment industry wants to enforce copyright. Internet companies want to continue freely spying on users. Law enforcement wants its own laws imposed on the Internet: laws that make surveillance easier, prohibit anonymity, mandate the removal of objectionable images and texts, and require ISPs to retain data about their customers’ Internet activities. Militaries want laws regarding cyber weapons, laws enabling wholesale surveillance, and laws mandating an Internet kill switch. “Security” is now a catch-all excuse for all sorts of authoritarianism, as well as for boondoggles and corporate profiteering.
Cory Doctorow recently spoke about the coming war on general-purpose computing. I talked about it in terms of the entertainment industry, and Jonathan Zittrain discussed it more generally, but Doctorow sees it as a much broader issue. Preventing people from copying digital files is only the first skirmish; just wait until the DEA wants to prevent chemical printers from making certain drugs, or the FBI wants to prevent 3D printers from making guns.
I’m not here to debate the merits of any of these policies, but instead to point out that people will debate them. Elected officials will be expected to understand security implications, both good and bad, and will make laws based on that understanding. And if they aren’t able to understand security engineering, or even accept that there is such a thing, the result will be ineffective and harmful policies.
So what do we do? We need to establish security engineering as a valid profession in the minds of the public and policy makers. This is less about certifications and (heaven forbid) licensing, and more about perception—and cultivating a security mindset. Amateurs produce amateur security, which costs more in dollars, time, liberty, and dignity while giving us less—or even no—security. We need everyone to know that.
We also need to engage with real-world security problems, and apply our expertise to the variety of technical and socio-technical systems that affect broader society. Everything involves computers, and almost everything involves the Internet. More and more, computer security is security.
Finally, and perhaps most importantly, we need to learn how to talk about security engineering to a non-technical audience. We need to convince policy makers to follow a logical approach instead of an emotional one—an approach that includes threat modeling, failure analysis, searching for unintended consequences, and everything else in an engineer’s approach to design. Powerful lobbying forces are attempting to force security policies on society, largely for non-security reasons, and sometimes in secret. We need to stand up for security.
A shorter version of this essay appeared in the September/October 2012 issue of IEEE Security & Privacy.
Rand Paul has introduced legislation to rein in the TSA. There are two bills:
One bill would require that the mostly federalized program be turned over to private screeners and allow airports with Department of Homeland Security approval to select companies to handle the work.
This seems to be a result of a fundamental misunderstanding of the economic incentives involved here, combined with magical thinking that a market solution solves all. In airport screening, the passenger isn’t the customer. (Technically he is, but only indirectly.) The airline isn’t even the customer. The customer is the U.S. government, which is in the grip of an irrational fear of terrorism.
It doesn’t matter if an airport screener receives a paycheck signed by the Department of the Treasury or Private Airport Screening Services, Inc. As long as a terrorized government—one that needs to be seen by voters as “tough on terror” and wants to stop every terrorist attack, regardless of the cost, and is willing to sacrifice all for the illusion of security—gets to set the security standards, we’re going to get TSA-style security.
We can put the airlines, either directly or via airport fees, in charge of security, but that has problems in the other direction. Airlines don’t really care about terrorism; it’s rare, the costs to the airline are relatively small (remember that the government bailed the industry out after 9/11), and the rest of the costs are externalities and are borne by other people. So if airlines are in charge, we’re likely to get less security than makes sense.
It makes sense for a government to be in charge of airport security—either directly or by setting standards for contractors to follow, I don’t care—but we’ll only get sensible security when the government starts behaving sensibly.
The second bill would permit travelers to opt out of pat-downs and be rescreened, allow them to call a lawyer when detained, increase the role of dogs in explosive detection, let passengers “appropriately object to mistreatment,” allow children 12 years old and younger to avoid “unnecessary pat-downs” and require the distribution of the new rights at airports.
That legislation also would let airports decide to privatize if wanted and expand TSA’s PreCheck program for trusted travelers.
This is a mixed bag. Airports can already privatize security—SFO has done so—and TSA’s PreCheck is already being expanded. Opting out of pat-downs and being rescreened only makes sense if the pat-down request was the result of an anomaly in the screening process; my guess is that rescreening will just produce the same anomaly and still require a pat-down. The right to call a lawyer when detained is a good one, although in reality we passengers just want to make our flights; that’s why we let ourselves be subjected to this sort of treatment at airports. And the phrase “unnecessary pat-downs” all comes down to what is considered necessary. If a 12-year-old goes through a full-body scanner and a gun-shaped image shows up on the screen, is the subsequent pat-down necessary? What if it’s a long and thin image? What if he goes through a metal detector and it beeps? And who gets to decide what’s necessary? If it’s the TSA, nothing will change.
And dogs: a great idea, but a logistical nightmare. Dogs require space to eat, sleep, run, poop, and so on. They just don’t fit into your typical airport setup.
The problem isn’t government-run airport security, full-body scanners, the screening of children and the elderly, or even a paucity of dogs. The problem is that we were so terrorized that we demanded our government keep us safe at all costs. The problem is that our government was so terrorized after 9/11 that it gave an enormous amount of power to our security organizations. The problem is that the security-industrial complex has gotten large and powerful—and good at advancing its agenda—and that we’ve scared our public officials so badly that they don’t notice when security goes too far.
I too want to rein in the TSA, but the only way to do that is to change the TSA’s mission. And the only way to do that is to change the government that gives the TSA its mission. We need to refuse to be terrorized, and we need to elect non-terrorized legislators.
But that’s a long way off. In the near term, I’d like to see legislation that forces the TSA, the DHS, and anyone working in counterterrorism to justify their systems, procedures, and expenditures with cost-benefit analyses.
This is me on that issue:
An even more meaningful response to any of these issues would be to perform a cost-benefit analysis. These sorts of analyses are standard, even with regard to rare risks, but the TSA (and, in fact, the whole Department of Homeland Security) has never conducted them on any of its programmes or technologies. It’s incredible but true: the TSA does not analyse whether the security measures it deploys are worth deploying. In 2010, the National Academies of Science wrote a pretty damning report on this topic.
Filling in where the TSA and the DHS have left a void, academics have performed some cost-benefit analyses on specific airline-security measures. The results are pretty much what you would expect: the security benefits of most post-9/11 security changes do not justify the costs.
More on security cost-benefit analyses here and here. It’s not going to magically dismantle the security-industrial complex, eliminate the culture of fear, or imbue our elected officials with common sense—but it’s a start.
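To give a flavor of what those analyses look like, here’s a minimal sketch of the break-even calculation that Mueller and Stewart-style analyses use; all the numbers are placeholders, not anyone’s actual estimates:

```python
# Break-even sketch in the style of Mueller and Stewart's analyses.
# All inputs are placeholders, not estimates from their work.

def breakeven_attacks_per_year(annual_cost: float,
                               losses_per_attack: float,
                               risk_reduction: float) -> float:
    """How many otherwise-successful attacks per year a measure must avert
    to break even, given: annual_cost <= attacks * losses * risk_reduction."""
    return annual_cost / (losses_per_attack * risk_reduction)

# Hypothetical measure: $1 billion/year, against attacks costing $500 million
# each, assuming the measure halves the chance those attacks succeed.
n = breakeven_attacks_per_year(annual_cost=1e9,
                               losses_per_attack=500e6,
                               risk_reduction=0.5)
print(f"Must avert the equivalent of {n:.0f} attacks per year to break even")  # -> 4
```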
EDITED TO ADD (7/13): A rebuttal to my essay. It’s too insulting to respond directly to, but there are points worth debating.