Entries Tagged "essays"

Page 38 of 47

Risks of Data Reuse

We learned the news in March: Contrary to decades of denials, the U.S. Census Bureau used individual records to round up Japanese-Americans during World War II.

The Census Bureau is normally prohibited by law from revealing data that could be linked to specific individuals; the law exists to encourage people to answer census questions accurately and without fear. And while the Second War Powers Act of 1942 temporarily suspended that protection in order to locate Japanese-Americans, the Census Bureau had maintained that it only provided general information about neighborhoods.

New research proves they were lying.

The whole incident serves as a poignant illustration of one of the thorniest problems of the information age: data collected for one purpose and then used for another, or “data reuse.”

When we think about our personal data, what bothers us most is generally not the initial collection and use, but the secondary uses. I personally appreciate it when Amazon.com suggests books that might interest me, based on books I have already bought. I like it that my airline knows what type of seat and meal I prefer, and my hotel chain keeps records of my room preferences. I don’t mind that my automatic road-toll collection tag is tied to my credit card, and that I get billed automatically. I even like the detailed summary of my purchases that my credit card company sends me at the end of every year. What I don’t want, though, is any of these companies selling that data to brokers, or for law enforcement to be allowed to paw through those records without a warrant.

There are two bothersome issues about data reuse. First, we lose control of our data. In all of the examples above, there is an implied agreement between the data collector and me: It gets the data in order to provide me with some sort of service. Once the data collector sells it to a broker, though, it’s out of my hands. It might show up on some telemarketer’s screen, or in a detailed report to a potential employer, or as part of a data-mining system to evaluate my personal terrorism risk. It becomes part of my data shadow, which always follows me around but which I can never see.

This, of course, affects our willingness to give up personal data in the first place. The reason U.S. census data was declared off-limits for other uses was to placate Americans’ fears and assure them that they could answer questions truthfully. How accurate would you be in filling out your census forms if you knew the FBI would be mining the data, looking for terrorists? How would it affect your supermarket purchases if you knew people were examining them and making judgments about your lifestyle? I know many people who engage in data poisoning: deliberately lying on forms in order to propagate erroneous data. I’m sure many of them would stop that practice if they could be sure that the data was only used for the purpose for which it was collected.

The second issue about data reuse is error rates. All data has errors, and different uses can tolerate different amounts of error. The sorts of marketing databases you can buy on the web, for example, are notoriously error-filled. That’s OK; if the database of ultra-affluent Americans of a particular ethnicity you just bought has a 10 percent error rate, you can factor that cost into your marketing campaign. But that same database, with that same error rate, might be useless for law enforcement purposes.
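
To see how error rates propagate when data is reused, here is a back-of-the-envelope sketch. The numbers are invented but plausible, and it deliberately simplifies by treating the record error rate as the chance that an innocent person’s record wrongly matches:

```python
# Illustrative numbers only; assumes a record error shows up as a wrong match.
population = 300_000_000   # records in the database
targets = 1_000            # actual bad actors among them
error_rate = 0.10          # the 10 percent error rate from above

false_matches = (population - targets) * error_rate   # innocents flagged
true_matches = targets * (1 - error_rate)             # bad actors flagged

print(f"Innocents wrongly flagged: {false_matches:,.0f}")  # ~30,000,000
print(f"Bad actors flagged:        {true_matches:,.0f}")   # 900
# A marketer can budget for the 10 percent; an investigator drowns in it.
```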

Understanding error rates and how they propagate is vital when evaluating any system that reuses data, especially for law enforcement purposes. A few years ago, the Transportation Security Administration’s follow-on watch list system, Secure Flight, was going to use commercial data to give people a terrorism risk score and determine how much they were going to be questioned or searched at the airport. People rightly rebelled against the thought of being judged in secret, but there was much less discussion about whether the commercial data from credit bureaus was accurate enough for this application.

An even more egregious example of error-rate problems occurred in 2000, when the Florida Division of Elections contracted with Database Technologies (since merged with ChoicePoint) to remove convicted felons from the voting rolls. The databases used were filled with errors and the matching procedures were sloppy, which resulted in thousands of disenfranchised voters—mostly black—and almost certainly changed a presidential election result.

Of course, there are beneficial uses of secondary data. Take, for example, personal medical data. It’s personal and intimate, yet valuable to society in aggregate. Think of what we could do with a database of everyone’s health information: massive studies examining the long-term effects of different drugs and treatment options, different environmental factors, different lifestyle choices. There’s an enormous amount of important research potential hidden in that data, and it’s worth figuring out how to get at it without compromising individual privacy.

This is largely a matter of legislation. Technology alone can never protect our rights. There are just too many reasons not to trust it, and too many ways to subvert it. Data privacy ultimately stems from our laws, and strong legal protections are fundamental to protecting our information against abuse. But at the same time, technology is still vital.

Both the Japanese internment and the Florida voting-roll purge demonstrate that laws can change … and sometimes change quickly. We need to build systems with privacy-enhancing technologies that limit data collection wherever possible. Data that is never collected cannot be reused. Data that is collected anonymously, or deleted immediately after it is used, is much harder to reuse. It’s easy to build systems that collect data on everything—it’s what computers naturally do—but it’s far better to take the time to understand what data is needed and why, and only collect that.
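
As one concrete illustration of that principle, here is a minimal sketch of an analytics task built with data minimization in mind. The scenario and field names are hypothetical: count unique visitors per page without ever storing who they are, using a pseudonym key that is discarded when the process exits:

```python
import hashlib
import secrets

salt = secrets.token_bytes(16)  # held in memory only, never written to disk

def pseudonym(user_id: str) -> str:
    # Stable within this run; unlinkable once the salt is thrown away.
    return hashlib.sha256(salt + user_id.encode()).hexdigest()

unique_visitors: dict[str, set[str]] = {}

def record_visit(user_id: str, page: str) -> None:
    # Collect only what the question needs: a pseudonym and a page name.
    unique_visitors.setdefault(page, set()).add(pseudonym(user_id))

record_visit("alice@example.com", "/pricing")
record_visit("alice@example.com", "/pricing")  # repeat visit, not double-counted
record_visit("bob@example.com", "/pricing")
print({page: len(ids) for page, ids in unique_visitors.items()})  # {'/pricing': 2}
# Once the salt is gone, nothing stored here links back to a person;
# data that was never collected in identifiable form cannot be reused.
```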

History will record what we, here in the early decades of the information age, did to foster freedom, liberty and democracy. Did we build information technologies that protected people’s freedoms even during times when society tried to subvert them? Or did we build technologies that could easily be modified to watch and control? It’s bad civic hygiene to build an infrastructure that can be used to facilitate a police state.

This article originally appeared on Wired.com

Posted on June 28, 2007 at 8:34 AM

Portrait of the Modern Terrorist as an Idiot

The recently publicized terrorist plot to blow up John F. Kennedy International Airport, like so many of the terrorist plots over the past few years, is a study in alarmism and incompetence: on the part of the terrorists, our government and the press.

Terrorism is a real threat, and one that needs to be addressed by appropriate means. But allowing ourselves to be terrorized by wannabe terrorists and unrealistic plots—and worse, allowing our essential freedoms to be lost by using them as an excuse—is wrong.

The alleged plan, to blow up JFK’s fuel tanks and a small segment of the 40-mile petroleum pipeline that supplies the airport, was ridiculous. The fuel tanks are thick-walled, making them hard to damage. The airport tanks are separated from the pipelines by cutoff valves, so even if a fire broke out at the tanks, it would not back up into the pipelines. And the pipeline couldn’t blow up in any case, since there’s no oxygen to aid combustion. Not that the terrorists ever got to the stage—or demonstrated that they could get there—where they actually obtained explosives. Or even a current map of the airport’s infrastructure.

But read what Russell Defreitas, the lead terrorist, had to say: “Anytime you hit Kennedy, it is the most hurtful thing to the United States. To hit John F. Kennedy, wow…. They love JFK—he’s like the man. If you hit that, the whole country will be in mourning. It’s like you can kill the man twice.”

If these are the terrorists we’re fighting, we’ve got a pretty incompetent enemy.

You couldn’t tell that from the press reports, though. “The devastation that would be caused had this plot succeeded is just unthinkable,” U.S. Attorney Roslynn R. Mauskopf said at a news conference, calling it “one of the most chilling plots imaginable.” Sen. Arlen Specter (R-Pennsylvania) added, “It had the potential to be another 9/11.”

These people are just as deluded as Defreitas.

The only voice of reason out there seemed to be New York’s Mayor Michael Bloomberg, who said: “There are lots of threats to you in the world. There’s the threat of a heart attack for genetic reasons. You can’t sit there and worry about everything. Get a life…. You have a much greater danger of being hit by lightning than being struck by a terrorist.”

And he was widely excoriated for it.

This isn’t the first time a bunch of incompetent terrorists with an infeasible plot have been painted by the media as poised to do all sorts of damage to America. In May we learned about a six-man plan to stage an attack on Fort Dix by getting in disguised as pizza deliverymen and shooting as many soldiers and Humvees as they could, then retreating without losses to fight again another day. Their plan, such as it was, went awry when they took a videotape of themselves at weapons practice to a store for duplication and transfer to DVD. The store clerk contacted the police, who in turn contacted the FBI. (Thank you to the video store clerk for not overreacting, and to the FBI agent for infiltrating the group.)

The “Miami 7,” caught last year for plotting—among other things—to blow up the Sears Tower, were another incompetent group: no weapons, no bombs, no expertise, no money and no operational skill. And don’t forget Iyman Faris, the Ohio trucker who was convicted in 2003 for the laughable plot to take out the Brooklyn Bridge with a blowtorch. At least he eventually decided that the plan was unlikely to succeed.

I don’t think these nut jobs, with their movie-plot threats, even deserve the moniker “terrorist.” But in this country, while you have to be competent to pull off a terrorist attack, you don’t have to be competent to cause terror. All you need to do is start plotting an attack and—regardless of whether or not you have a viable plan, weapons or even the faintest clue—the media will aid you in terrorizing the entire population.

The most ridiculous JFK Airport-related story goes to the New York Daily News, with its interview with a waitress who served Defreitas salmon; the front-page headline blared, “Evil Ate at Table Eight.”

Following one of these abortive terror misadventures, the administration invariably jumps on the news to trumpet whatever ineffective “security” measure they’re trying to push, whether it be national ID cards, wholesale National Security Agency eavesdropping or massive data mining. Never mind that in all these cases, what caught the bad guys was old-fashioned police work—the kind of thing you’d see in decades-old spy movies.

The administration repeatedly credited the apprehension of Faris to the NSA’s warrantless eavesdropping programs, even though it’s just not true. The 9/11 attacks were no different: they succeeded partly because the FBI and CIA didn’t follow up on the leads they already had before the attacks.

Even the London liquid bombers were caught through traditional investigation and intelligence, but this doesn’t stop Secretary of Homeland Security Michael Chertoff from using them to justify (.pdf) access to airline passenger data.

Of course, even incompetent terrorists can cause damage. This has been repeatedly proven in Israel, and if shoe-bomber Richard Reid had been just a little less stupid and ignited his shoes in the lavatory, he might have taken out an airplane.

So these people should be locked up … assuming they are actually guilty, that is. Despite the initial press frenzies, the actual details of the cases frequently turn out to be far less damning. Too often it’s unclear whether the defendants are actually guilty, or if the police created a crime where none existed before.

The JFK Airport plotters seem to have been egged on by an informant, a twice-convicted drug dealer. An FBI informant almost certainly pushed the Fort Dix plotters to do things they wouldn’t have ordinarily done. The Miami gang’s Sears Tower plot was suggested by an FBI undercover agent who infiltrated the group. And in 2003, it took an elaborate sting operation involving three countries to arrest an arms dealer for selling a surface-to-air missile to an ostensible Muslim extremist. Entrapment is a very real possibility in all of these cases.

The rest of them stink of exaggeration. Jose Padilla was not actually prepared to detonate a dirty bomb in the United States, despite histrionic administration claims to the contrary. Now that the trial is proceeding, the best the government can charge him with is conspiracy to murder, kidnap and maim, and it seems unlikely that the charges will stick. An alleged ringleader of the U.K. liquid bombers, Rashid Rauf, had charges of terrorism dropped for lack of evidence (of the 25 arrested, only 16 were charged). And now it seems like the JFK mastermind was more talk than action, too.

Remember the “Lackawanna Six,” those terrorists from upstate New York who pleaded guilty in 2003 to “providing support or resources to a foreign terrorist organization”? They entered their plea because they were threatened with being removed from the legal system altogether. We have no idea if they were actually guilty, or of what.

Even under the best of circumstances, these are difficult prosecutions. Arresting people before they’ve carried out their plans means trying to prove intent, which rapidly slips into the province of thought crime. The prosecution regularly uses obtuse religious literature found in the defendants’ homes to prove what they believe, and this can result in courtroom debates on Islamic theology. And then there’s the issue of demonstrating a connection between a book on a shelf and an idea in the defendant’s head, as if your reading of this article—or purchasing of my book—proves that you agree with everything I say. (The Atlantic recently published a fascinating article on this.)

I’ll be the first to admit that I don’t have all the facts in any of these cases. None of us do. So let’s have some healthy skepticism. Skepticism when we read about these terrorist masterminds who were poised to kill thousands of people and do incalculable damage. Skepticism when we’re told that their arrest proves that we need to give away our own freedoms and liberties. And skepticism that those arrested are even guilty in the first place.

There is a real threat of terrorism. And while I’m all in favor of the terrorists’ continuing incompetence, I know that some will prove more capable. We need real security that doesn’t require us to guess the tactic or the target: intelligence and investigation—the very things that caught all these terrorist wannabes—and emergency response. But the “war on terror” rhetoric is more politics than rationality. We shouldn’t let the politics of fear make us less safe.

This essay originally appeared on Wired.com.

EDITED TO ADD (6/14): Another essay on the topic.

Posted on June 14, 2007 at 8:28 AM

Nonsecurity Considerations in Security Decisions

Security decisions are generally made for nonsecurity reasons. For security professionals and technologists, this can be a hard lesson. We like to think that security is vitally important. But anyone who has tried to convince the sales VP to give up her department’s Blackberries or the CFO to stop sharing his password with his secretary knows security is often viewed as a minor consideration in a larger decision. This issue’s articles on managing organizational security make this point clear.

Below is a diagram of a security decision. At its core are assets, which a security system protects. Security can fail in two ways: either attackers can successfully bypass it, or it can mistakenly block legitimate users. There are, of course, more users than attackers, so the second kind of failure is often more important. There’s also a feedback mechanism with respect to security countermeasures: both users and attackers learn about the security and its failings. Sometimes they learn how to bypass security, and sometimes they learn not to bother with the asset at all.

Threats are complicated: attackers have certain goals, and they implement specific attacks to achieve them. Attackers can be legitimate users of assets, as well (imagine a terrorist who needs to travel by air, but eventually wants to blow up a plane). And a perfectly reasonable outcome of defense is attack diversion: the attacker goes after someone else’s asset instead.

Asset owners control the security system, but not directly. They implement security through some sort of policy—either formal or informal—that some combination of trusted people and trusted systems carries out. Owners are affected by risks … but really, only by perceived risks. They’re also affected by a host of other considerations, including those legitimate users mentioned previously, and the trusted people needed to implement the security policy.

Looking over the diagram, it’s obvious that the effectiveness of security is only a minor consideration in an asset owner’s security decision. And that’s how it should be.

[Diagram caption: Whether a security countermeasure repels or allows attacks (green and red arrows, respectively) is just a small consideration when making a security trade-off.]

This essay originally appeared in IEEE Security and Privacy.

Posted on June 7, 2007 at 11:25 AM

Cyberwar

I haven’t posted anything about the cyberwar between Russia and Estonia because, well, because I didn’t think there was anything new to say. We know that this kind of thing is possible. We don’t have any definitive proof that Russia was behind it. But it would be foolish to think that the world’s various militaries don’t have capabilities like this.

And anyway, I wrote about cyberwar back in January 2005.

But it seems that the essay never made it into the blog. So here it is again.


Cyberwar

The first problem with any discussion about cyberwar is definitional. I’ve been reading about cyberwar for years now, and there seem to be as many definitions of the term as there are people who write about the topic. Some people try to limit cyberwar to military actions taken during wartime, while others are so inclusive that they include the script kiddies who deface websites for fun.

I think the restrictive definition is more useful, and would like to define four different terms as follows:

Cyberwar—Warfare in cyberspace. This includes attacks against a nation’s military—forcing critical communications channels to fail, for example—and attacks against the civilian population.

Cyberterrorism—The use of cyberspace to commit terrorist acts. An example might be hacking into a computer system to cause a nuclear power plant to melt down, a dam to open, or two airplanes to collide. In a previous Crypto-Gram essay, I discussed how realistic the cyberterrorism threat is.

Cybercrime—Crime in cyberspace. This includes much of what we’ve already experienced: theft of intellectual property, extortion based on the threat of DDOS attacks, fraud based on identity theft, and so on.

Cybervandalism—The script kiddies who deface websites for fun are technically criminals, but I think of them more as vandals or hooligans. They’re like the kids who spray paint buses: in it more for the thrill than anything else.

At first glance, there’s nothing new about these terms except the “cyber” prefix. War, terrorism, crime, even vandalism are old concepts. That’s correct: the only thing new is the domain; it’s the same old stuff occurring in a new arena. But because the arena of cyberspace is different from other arenas, there are differences worth considering.

One thing that hasn’t changed is that the terms overlap: although the goals are different, many of the tactics used by armies, terrorists, and criminals are the same. Just as all three groups use guns and bombs, all three groups can use cyberattacks. And just as every shooting is not necessarily an act of war, every successful Internet attack, no matter how deadly, is not necessarily an act of cyberwar. A cyberattack that shuts down the power grid might be part of a cyberwar campaign, but it also might be an act of cyberterrorism, cybercrime, or even—if it’s done by some fourteen-year-old who doesn’t really understand what he’s doing—cybervandalism. Which it is will depend on the motivations of the attacker and the circumstances surrounding the attack…just as in the real world.

For it to be cyberwar, it must first be war. And in the 21st century, war will inevitably include cyberwar. For just as war moved into the air with the development of kites and balloons and then aircraft, and war moved into space with the development of satellites and ballistic missiles, war will move into cyberspace with the development of specialized weapons, tactics, and defenses.

The Waging of Cyberwar

There should be no doubt that the smarter and better-funded militaries of the world are planning for cyberwar, both attack and defense. It would be foolish for a military to ignore the threat of a cyberattack and not invest in defensive capabilities, or to disregard the strategic or tactical possibility of launching an offensive cyberattack against an enemy during wartime. And while history has taught us that many militaries are indeed foolish and ignore the march of progress, cyberwar has been discussed too much in military circles to be ignored.

This implies that at least some of our world’s militaries have Internet attack tools that they’re saving in case of wartime. They could be denial-of-service tools. They could be exploits that would allow military intelligence to penetrate military systems. They could be viruses and worms similar to what we’re seeing now, but perhaps country- or network-specific. They could be Trojans that eavesdrop on networks, disrupt network operations, or allow an attacker to penetrate still other networks.

Script kiddies are attackers who run exploit code written by others, but don’t really understand the intricacies of what they’re doing. Conversely, professional attackers spend an enormous amount of time developing exploits: finding vulnerabilities, writing code to exploit them, figuring out how to cover their tracks. The real professionals don’t release their code to the script kiddies; the stuff is much more valuable if it remains secret until it is needed. I believe that militaries have collections of vulnerabilities in common operating systems, generic applications, or even custom military software that their potential enemies are using, and code to exploit those vulnerabilities. I believe that these militaries are keeping these vulnerabilities secret, and that they are saving them in case of wartime or other hostilities. It would be irresponsible for them not to.

The most obvious cyberattack is the disabling of large parts of the Internet, at least for a while. Certainly some militaries have the capability to do this, but in the absence of global war I doubt that they would do so; the Internet is far too useful an asset and far too large a part of the world economy. More interesting is whether they would try to disable national pieces of it. If Country A went to war with Country B, would Country A want to disable Country B’s portion of the Internet, or remove connections between Country B’s Internet and the rest of the world? Depending on the country, a low-tech solution might be the easiest: disable whatever undersea cables they’re using as access. Could Country A’s military turn its own Internet into a domestic-only network if it wanted?

For a more surgical approach, we can also imagine cyberattacks designed to destroy particular organizations’ networks; for example, the denial-of-service attack against the Al Jazeera website during the recent Iraqi war, allegedly by pro-American hackers but possibly by the government. We can imagine a cyberattack against the computer networks at a nation’s military headquarters, or the computer networks that handle logistical information.

One important thing to remember is that destruction is the last thing a military wants to do with a communications network. A military only wants to shut an enemy’s network down if they aren’t getting useful information from it. The best thing to do is to infiltrate the enemy’s computers and networks, spy on them, and surreptitiously disrupt select pieces of their communications when appropriate. The next best thing is to passively eavesdrop. After that, the next best is to perform traffic analysis: analyze who is talking to whom and the characteristics of that communication. Only if a military can’t do any of that do they consider shutting the thing down. Or if, as sometimes but rarely happens, the benefits of completely denying the enemy the communications channel outweigh all of the advantages.

Properties of Cyberwar

Because attackers and defenders use the same network hardware and software, there is a fundamental tension between cyberattack and cyberdefense. The National Security Agency has referred to this as the “equities issue,” and it can be summarized as follows. When a military discovers a vulnerability in a common product, they can either alert the manufacturer and fix the vulnerability, or not tell anyone. It’s not an easy decision. Fixing the vulnerability gives both the good guys and the bad guys a more secure system. Keeping the vulnerability secret means that the good guys can exploit the vulnerability to attack the bad guys, but it also means that the good guys are vulnerable. As long as everyone uses the same microprocessors, operating systems, network protocols, applications software, etc., the equities issue will always be a consideration when planning cyberwar.

Cyberwar can take on aspects of espionage, and does not necessarily involve open warfare. (In military talk, cyberwar is not necessarily “hot.”) Since much of cyberwar will be about seizing control of a network and eavesdropping on it, there may not be any obvious damage from cyberwar operations. This means that the same tactics might be used in peacetime by national intelligence agencies. There’s considerable risk here. Just as U.S. U-2 flights over the Soviet Union could have been viewed as an act of war, the deliberate penetration of a country’s computer networks might be as well.

Cyberattacks target infrastructure. In this way they are no different than conventional military attacks against other networks: power, transportation, communications, etc. All of these networks are used by both civilians and the military during wartime, and attacks against them inconvenience both groups of people. For example, when the Allies bombed German railroad bridges during World War II, that affected both civilian and military transport. And when the United States bombed Iraqi communications links in both the First and Second Iraqi Wars, that affected both civilian and military communications. Cyberattacks, even attacks targeted as precisely as today’s smart bombs, are likely to have collateral effects.

Cyberattacks can be used to wage information war. Information war is another topic that’s received considerable media attention of late, although it is not new. Dropping leaflets on enemy soldiers to persuade them to surrender is information war. Broadcasting radio programs to enemy troops is information war. As people get more and more of their information over cyberspace, cyberspace will increasingly become a theater for information war. It’s not hard to imagine cyberattacks designed to co-opt the enemy’s communications channels and use them as a vehicle for information war.

Because cyberwar targets information infrastructure, the waging of it can be more damaging to countries that have significant computer-network infrastructure. The idea is that a technologically poor country might decide that a cyberattack that affects the entire world would disproportionately affect its enemies, because rich nations rely on the Internet much more than poor ones. In some ways this is the dark side of the digital divide, and one of the reasons countries like the United States are so worried about cyberdefense.

Cyberwar is asymmetric, and can be a guerrilla attack. Unlike conventional military offensives involving divisions of men and supplies, cyberattacks are carried out by a few trained operatives. In this way, cyberattacks can be part of a guerrilla warfare campaign.

Cyberattacks also make effective surprise attacks. For years we’ve heard dire warnings of an “electronic Pearl Harbor.” These are largely hyperbole today. I discuss this more in that previous Crypto-Gram essay on cyberterrorism, but right now the infrastructure just isn’t sufficiently vulnerable in that way.

Cyberattacks do not necessarily have an obvious origin. Unlike in other forms of warfare, misdirection is more likely to be a feature of a cyberattack. It’s possible to have damage being done, but not know where it’s coming from. This is a significant difference; there’s something terrifying about not knowing your opponent—or knowing it, and then being wrong. Imagine if, after Pearl Harbor, we had not known who attacked us.

Cyberwar is a moving target. A few paragraphs ago, I said that the risks of an electronic Pearl Harbor are largely hyperbole today. That’s true; but this, like all other aspects of cyberspace, is continually changing. Technological improvements affect everyone, including cyberattack mechanisms. And the Internet is becoming critical to more of our infrastructure, making cyberattacks more attractive. There will be a time in the future, perhaps not too far into the future, when a surprise cyberattack becomes a realistic threat.

And finally, cyberwar is a multifaceted concept. It’s part of a larger military campaign, and attacks are likely to have both real-world and cyber components. A military might target the enemy’s communications infrastructure through both physical attack—bombings of selected communications facilities and transmission cables—and virtual attack. An information warfare campaign might include dropping of leaflets, usurpation of a television channel, and mass sending of e-mail. And many cyberattacks still have easier non-cyber equivalents: A country wanting to isolate another country’s Internet might find a low-tech solution, involving the acquiescence of backbone companies like Cable & Wireless, easier than a targeted worm or virus. Cyberwar doesn’t replace war; it’s just another arena in which the larger war is fought.

People overplay the risks of cyberwar and cyberterrorism. It’s sexy, and it gets media attention. And at the same time, people underplay the risks of cybercrime. Today crime is big business on the Internet, and it’s getting bigger all the time. But luckily, the defenses are the same. The countermeasures aimed at preventing both cyberwar and cyberterrorist attacks will also defend against cybercrime and cybervandalism. So even if organizations secure their networks for the wrong reasons, they’ll do the right thing.

Here’s my previous essay on cyberterrorism.

Posted on June 4, 2007 at 6:13 AM

Tactics, Targets, and Objectives

If you encounter an aggressive lion, stare him down. But not a leopard; avoid his gaze at all costs. In both cases, back away slowly; don’t run. If you stumble on a pack of hyenas, run and climb a tree; hyenas can’t climb trees. But don’t do that if you’re being chased by an elephant; he’ll just knock the tree down. Stand still until he forgets about you.

I spent the last few days on safari in a South African game park, and this was just some of the security advice we were all given. What’s interesting about this advice is how well-defined it is. The defenses might not be terribly effective—you still might get eaten, gored or trampled—but they’re your best hope. Doing something else isn’t advised, because animals do the same things over and over again. These are security countermeasures against specific tactics.

Lions and leopards learn tactics that work for them, and I was taught tactics to defend myself. Humans are intelligent, and that means we are more adaptable than animals. But we’re also, generally speaking, lazy and stupid; and, like a lion or hyena, we will repeat tactics that work. Pickpockets use the same tricks over and over again. So do phishers, and school shooters. If improvised explosive devices didn’t work often enough, Iraqi insurgents would do something else.

So security against people generally focuses on tactics as well.

A friend of mine recently asked me where she should hide her jewelry in her apartment, so that burglars wouldn’t find it. Burglars tend to look in the same places all the time—dresser tops, night tables, dresser drawers, bathroom counters—so hiding valuables somewhere else is more likely to be effective, especially against a burglar who is pressed for time. Leave decoy cash and jewelry in an obvious place so a burglar will think he’s found your stash and then leave. Again, there’s no guarantee of success, but it’s your best hope.

The key to these countermeasures is to find the pattern: the common attack tactic that is worth defending against. That takes data. A single instance of an attack that didn’t work—liquid bombs, shoe bombs—or one instance that did—9/11—is not a pattern. Implementing defensive tactics against them is the same as my safari guide saying: “We’ve only ever heard of one tourist encountering a lion. He stared it down and survived. Another tourist tried the same thing with a leopard, and he got eaten. So when you see a lion….” The advice I was given was based on thousands of years of collective wisdom from people encountering African animals again and again.

Compare this with the Transportation Security Administration’s approach. With every unique threat, TSA implements a countermeasure with no basis to say that it helps, or that the threat will ever recur.

Furthermore, human attackers can adapt more quickly than lions. A lion won’t learn that he should ignore people who stare him down, and eat them anyway. But people will learn. Burglars now know the common “secret” places people hide their valuables—the toilet, cereal boxes, the refrigerator and freezer, the medicine cabinet, under the bed—and look there. I told my friend to find a different secret place, and to put decoy valuables in a more obvious place.

This is the arms race of security. Common attack tactics result in common countermeasures. Eventually, those countermeasures will be evaded and new attack tactics developed. These, in turn, require new countermeasures. You can easily see this in the constant arms race that is credit card fraud, ATM fraud or automobile theft.

The result of these tactic-specific security countermeasures is to make the attacker go elsewhere. For the most part, the attacker doesn’t particularly care about the target. Lions don’t care who or what they eat; to a lion, you’re just a conveniently packaged bag of protein. Burglars don’t care which house they rob, and terrorists don’t care who they kill. If your countermeasure makes the lion attack an impala instead of you, or if your burglar alarm makes the burglar rob the house next door instead of yours, that’s a win for you.

Tactics matter less if the attacker is after you personally. If, for example, you have a priceless painting hanging in your living room and the burglar knows it, he’s not going to rob the house next door instead—even if you have a burglar alarm. He’s going to figure out how to defeat your system. Or he’ll stop you at gunpoint and force you to open the door. Or he’ll pose as an air-conditioner repairman. What matters is the target, and a good attacker will consider a variety of tactics to reach his target.

This approach requires a different kind of countermeasure, but it’s still well-understood in the security world. For people, it’s what alarm companies, insurance companies and bodyguards specialize in. President Bush needs a different level of protection against targeted attacks than Bill Gates does, and I need a different level of protection than either of them. It would be foolish of me to hire bodyguards in case someone was targeting me for robbery or kidnapping. Yes, I would be more secure, but it’s not a good security trade-off.

Al-Qaida terrorism is different yet again. The goal is to terrorize. It doesn’t care about the target, but it doesn’t have any pattern of tactics, either. Given that, the best way to spend our counterterrorism dollar is on intelligence, investigation and emergency response. And to refuse to be terrorized.

These measures are effective because they don’t assume any particular tactic, and they don’t assume any particular target. We should only apply specific countermeasures when the cost-benefit ratio makes sense (reinforcing airplane cockpit doors) or when a specific tactic is repeatedly observed (lions attacking people who don’t stare them down). Otherwise, general countermeasures are far more effective a defense.

This essay originally appeared on Wired.com.

EDITED TO ADD (6/14): Learning behavior in tigers.

Posted on May 31, 2007 at 6:11 AM

Rare Risk and Overreactions

Everyone had a reaction to the horrific events of the Virginia Tech shootings. Some of those reactions were rational. Others were not.

A high school student was suspended for customizing a first-person shooter game with a map of his school. A contractor was fired from his government job for talking about a gun, and then visited by the police when he created a comic about the incident. A dean at Yale banned realistic stage weapons from the university theaters—a policy that was reversed within a day. And some teachers terrorized a sixth-grade class by staging a fake gunman attack, without telling them that it was a drill.

These things all happened, even though shootings like this are incredibly rare; even though—for all the press—less than one percent (.pdf) of homicides and suicides of children ages 5 to 19 occur in schools. In fact, these overreactions occurred, not despite these facts, but because of them.

The Virginia Tech massacre is precisely the sort of event we humans tend to overreact to. Our brains aren’t very good at probability and risk analysis, especially when it comes to rare occurrences. We tend to exaggerate spectacular, strange and rare events, and downplay ordinary, familiar and common ones. There’s a lot of research in the psychological community about how the brain responds to risk—some of it I have already written about—but the gist is this: Our brains are much better at processing the simple risks we’ve had to deal with throughout most of our species’ existence, and much poorer at evaluating the complex risks society forces us to face today.

Novelty plus dread equals overreaction.

We can see the effects of this all the time. We fear being murdered, kidnapped, raped and assaulted by strangers, when it’s far more likely that the perpetrator of such offenses is a relative or a friend. We worry about airplane crashes and rampaging shooters instead of automobile crashes and domestic violence—both far more common.

In the United States, dogs, snakes, bees and pigs each kill more people per year (.pdf) than sharks. In fact, dogs kill more humans than any animal except for other humans. Sharks are more dangerous than dogs, yes, but we’re far more likely to encounter dogs than sharks.

Our greatest recent overreaction to a rare event was our response to the terrorist attacks of 9/11. I remember then-Attorney General John Ashcroft giving a speech in Minnesota—where I live—in 2003, and claiming that the fact there were no new terrorist attacks since 9/11 was proof that his policies were working. I thought: “There were no terrorist attacks in the two years preceding 9/11, and you didn’t have any policies. What does that prove?”

What it proves is that terrorist attacks are very rare, and maybe our reaction wasn’t worth the enormous expense, loss of liberty, attacks on our Constitution and damage to our credibility on the world stage. Still, overreacting was the natural thing for us to do. Yes, it’s security theater, but it makes us feel safer.

People tend to base risk analysis more on personal story than on data, despite the old joke that “the plural of anecdote is not data.” If a friend gets mugged in a foreign country, that story is more likely to affect how safe you feel traveling to that country than abstract crime statistics.

We give storytellers we have a relationship with more credibility than strangers, and stories that are close to us more weight than stories from foreign lands. In other words, proximity of relationship affects our risk assessment. And who is everyone’s major storyteller these days? Television. (Nassim Nicholas Taleb’s great book, The Black Swan: The Impact of the Highly Improbable, discusses this.)

Consider the reaction to another event from last month: professional baseball player Josh Hancock got drunk and died in a car crash. As a result, several baseball teams are banning alcohol in their clubhouses after games. Aside from this being a ridiculous reaction to an incredibly rare event (2,430 baseball games per season, 35 people per clubhouse, two clubhouses per game. And how often has this happened?), it makes no sense as a solution. Hancock didn’t get drunk in the clubhouse; he got drunk at a bar. But Major League Baseball needs to be seen as doing something, even if that something doesn’t make sense—even if that something actually increases risk by forcing players to drink at bars instead of at the clubhouse, where there’s more control over the practice.
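
Working through the arithmetic in that parenthetical makes the rarity vivid (a quick sketch using the essay’s own rough figures):

```python
games_per_season = 2430      # MLB regular-season games
clubhouses_per_game = 2      # home and visiting teams
people_per_clubhouse = 35    # players and staff, roughly

exposures = games_per_season * clubhouses_per_game * people_per_clubhouse
print(f"{exposures:,} person-clubhouse-games per season")  # 170,100
# One death against roughly 170,000 exposures per season, and the drinking
# in question happened at a bar, not in a clubhouse at all.
```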

I tell people that if it’s in the news, don’t worry about it. The very definition of “news” is “something that hardly ever happens.” It’s when something isn’t in the news, when it’s so common that it’s no longer news—car crashes, domestic violence—that you should start worrying.

But that’s not the way we think. Psychologist Scott Plous said it well in The Psychology of Judgment and Decision Making: “In very general terms: (1) The more available an event is, the more frequent or probable it will seem; (2) the more vivid a piece of information is, the more easily recalled and convincing it will be; and (3) the more salient something is, the more likely it will be to appear causal.”

So, when faced with a very available and highly vivid event like 9/11 or the Virginia Tech shootings, we overreact. And when faced with all the salient related events, we assume causality. We pass the Patriot Act. We think if we give guns out to students, or maybe make it harder for students to get guns, we’ll have solved the problem. We don’t let our children go to playgrounds unsupervised. We stay out of the ocean because we read about a shark attack somewhere.

It’s our brains again. We need to “do something,” even if that something doesn’t make sense; even if it is ineffective. And we need to do something directly related to the details of the actual event. So instead of implementing effective, but more general, security measures to reduce the risk of terrorism, we ban box cutters on airplanes. And we look back on the Virginia Tech massacre with 20-20 hindsight and recriminate ourselves about the things we should have done.

Lastly, our brains need to find someone or something to blame. (Jon Stewart has an excellent bit on the Virginia Tech scapegoat search, and media coverage in general.) But sometimes there is no scapegoat to be found; sometimes we did everything right, but just got unlucky. We simply can’t prevent a lone nutcase from shooting people at random; there’s no security measure that would work.

As circular as it sounds, rare events are rare primarily because they don’t occur very often, and not because of any preventive security measures. And implementing security measures to make these rare events even rarer is like the joke about the guy who stomps around his house to keep the elephants away.

“Elephants? There are no elephants in this neighborhood,” says a neighbor.

“See how well it works!”

If you want to do something that makes security sense, figure out what’s common among a bunch of rare events, and concentrate your countermeasures there. Focus on the general risk of terrorism, and not the specific threat of airplane bombings using liquid explosives. Focus on the general risk of troubled young adults, and not the specific threat of a lone gunman wandering around a college campus. Ignore the movie-plot threats, and concentrate on the real risks.

This essay originally appeared on Wired.com, my 42nd essay on that site.

EDITED TO ADD (6/5): Archiloque has translated this essay into French.

EDITED TO ADD (6/14): The British academic risk researcher Prof. John Adams wrote an insightful essay on this topic called “What Kills You Matters—Not Numbers.”

Posted on May 17, 2007 at 2:16 PM

Is Penetration Testing Worth It?

There are security experts who insist penetration testing is essential for network security, and you have no hope of being secure unless you do it regularly. And there are contrarian security experts who tell you penetration testing is a waste of time; you might as well throw your money away. Both of these views are wrong. The reality of penetration testing is more complicated and nuanced.

Penetration testing is a broad term. It might mean breaking into a network to demonstrate you can. It might mean trying to break into a network to document vulnerabilities. It might involve a remote attack, physical penetration of a data center or social engineering attacks. It might use commercial or proprietary vulnerability scanning tools, or rely on skilled white-hat hackers. It might just evaluate software version numbers and patch levels, and make inferences about vulnerabilities.

It’s going to be expensive, and you’ll get a thick report when the testing is done.

And that’s the real problem. You really don’t want a thick report documenting all the ways your network is insecure. You don’t have the budget to fix them all, so the document will sit around waiting to make someone look bad. Or, even worse, it’ll be discovered in a breach lawsuit. Do you really want an opposing attorney to ask you to explain why you paid to document the security holes in your network, and then didn’t fix them? Probably the safest thing you can do with the report, after you read it, is shred it.

Given enough time and money, a pen test will find vulnerabilities; there’s no point in proving it. And if you’re not going to fix all the uncovered vulnerabilities, there’s no point uncovering them. But there is a way to do penetration testing usefully. For years I’ve been saying security consists of protection, detection and response—and you need all three to have good security. Before you can do a good job with any of these, you have to assess your security. And done right, penetration testing is a key component of a security assessment.

I like to restrict penetration testing to the most commonly exploited critical vulnerabilities, like those found on the SANS Top 20 list. If you have any of those vulnerabilities, you really need to fix them.
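
In that spirit, a narrowly scoped check can be small and cheap. Here is a minimal sketch in that direction; the port list is my own illustration, not the SANS list itself, and reachability is only a crude proxy for vulnerability:

```python
import socket

# Hypothetical scope: a few services that recur on "most commonly
# exploited" lists. Reachable does not mean vulnerable, but it is a
# cheap first question to ask.
HIGH_RISK_PORTS = {
    21: "FTP",
    23: "Telnet",
    445: "SMB",
    1433: "MS-SQL",
    3389: "RDP",
}

def exposed_services(host: str, timeout: float = 1.0) -> list[str]:
    """Return the high-risk services that accept connections on host."""
    findings = []
    for port, name in HIGH_RISK_PORTS.items():
        try:
            with socket.create_connection((host, port), timeout=timeout):
                findings.append(f"{name} (port {port}) is reachable")
        except OSError:
            pass  # closed or filtered: nothing to report
    return findings

if __name__ == "__main__":
    for finding in exposed_services("192.0.2.10"):  # RFC 5737 test address
        print(finding)
```

Anything a test this narrow turns up is worth fixing on its own, with no thick report required.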

If you think about it, penetration testing is an odd business. Is there an analogue to it anywhere else in security? Sure, militaries run these exercises all the time, but how about in business? Do we hire burglars to try to break into our warehouses? Do we attempt to commit fraud against ourselves? No, we don’t.

Penetration testing has become big business because systems are so complicated and poorly understood. We know about burglars and kidnapping and fraud, but we don’t know about computer criminals. We don’t know what’s dangerous today, and what will be dangerous tomorrow. So we hire penetration testers in the belief they can explain it.

There are two reasons why you might want to conduct a penetration test. One, you want to know whether a certain vulnerability is present because you’re going to fix it if it is. And two, you need a big, scary report to persuade your boss to spend more money. If neither is true, I’m going to save you a lot of money by giving you this free penetration test: You’re vulnerable.

Now, go do something useful about it.

This essay appeared in the March issue of Information Security, as the first half of a point/counterpoint with Marcus Ranum. Here’s his half.

Posted on May 15, 2007 at 7:05 AM

Does Secrecy Help Protect Personal Information?

Personal information protection is an economic problem, not a security problem. And the problem can be easily explained: The organizations we trust to protect our personal information do not suffer when information gets exposed. On the other hand, individuals who suffer when personal information is exposed don’t have the capability to protect that information.

There are actually two problems here: Personal information is easy to steal, and it’s valuable once stolen. We can’t solve one problem without solving the other. The solutions aren’t easy, and you’re not going to like them.

First, fix the economic problem. Credit card companies make more money extending easy credit and making it trivial for customers to use their cards than they lose from fraud. They won’t improve their security as long as you (and not they) are the one who suffers from identity theft. It’s the same for banks and brokerages: As long as you’re the one who suffers when your account is hacked, they don’t have any incentive to fix the problem. And data brokers like ChoicePoint are worse; they don’t suffer if they reveal your information. You don’t have a business relationship with them; you can’t even switch to a competitor in disgust.

Credit card security works as well as it does because the 1968 Truth in Lending Act limits consumer liability for fraud to $50. If the credit card companies could pass fraud losses on to the consumers, they would be spending far less money to stop those losses. But once Congress forced them to suffer the costs of fraud, they invented all sorts of security measures—real-time transaction verification, expert systems patrolling the transaction database and so on—to prevent fraud. The lesson is clear: Make the party in the best position to mitigate the risk responsible for the risk. What this will do is enable the capitalist innovation engine. Once it’s in the financial interest of financial institutions to protect us from identity theft, they will.

Second, stop using personal information to authenticate people. Watch how credit cards work. Notice that the store clerk barely looks at your signature, or how you can use credit cards remotely where no one can check your signature. The credit card industry learned decades ago that authenticating people has only limited value. Instead, they put most of their effort into authenticating the transaction, and they’re much more secure because of it.
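
Here is a minimal sketch of what authenticating the transaction can look like; the scoring rules and thresholds are invented for illustration, not how any real issuer works:

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    amount: float
    country: str
    merchant_category: str

def risk_score(txn: Transaction, history: list[Transaction]) -> float:
    """Toy score: how far this transaction deviates from the card's history."""
    score = 0.0
    amounts = [t.amount for t in history]
    if amounts and txn.amount > 3 * (sum(amounts) / len(amounts)):
        score += 0.5  # unusually large purchase
    if txn.country not in {t.country for t in history}:
        score += 0.3  # first purchase seen from this country
    if txn.merchant_category not in {t.merchant_category for t in history}:
        score += 0.2  # unfamiliar merchant type
    return score

history = [Transaction(40.0, "US", "grocery"), Transaction(25.0, "US", "fuel")]
print(risk_score(Transaction(900.0, "RO", "electronics"), history))  # 1.0
# Note that nothing here asks who is holding the card; the transaction
# itself is what gets judged, and a high score triggers verification.
```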

This won’t solve the problem of securing our personal information, but it will greatly reduce the threat. Once the information is no longer of value, you only have to worry about securing the information from voyeurs rather than the more common—and more financially motivated—fraudsters.

And third, fix the other economic problem: Organizations that expose our personal information aren’t hurt by that exposure. We need a comprehensive privacy law that gives individuals ownership of their personal information and allows them to take action against organizations that don’t care for it properly.

“Passwords” like credit card numbers and mother’s maiden name used to work, but we’ve forever left the world where our privacy comes from the obscurity of our personal information and the difficulty others have in accessing it. We need to abandon security systems that are based on obscurity and difficulty, and build legal protections to take over where technological advances have left us exposed.

This essay appeared in the January issue of Information Security, as the second half of a point/counterpoint with Marcus Ranum. Here’s his half.

Posted on May 14, 2007 at 12:24 PM

Is Big Brother a Big Deal?

Big Brother isn’t what he used to be. George Orwell extrapolated his totalitarian state from the 1940s. Today’s information society looks nothing like Orwell’s world, and watching and intimidating a population today isn’t anything like what Winston Smith experienced.

Data collection in 1984 was deliberate; today’s is inadvertent. In the information society, we generate data naturally. In Orwell’s world, people were naturally anonymous; today, we leave digital footprints everywhere.

1984’s police state was centralized; today’s is decentralized. Your phone company knows who you talk to, your credit card company knows where you shop and Netflix knows what you watch. Your ISP can read your email, your cell phone can track your movements and your supermarket can monitor your purchasing patterns. There’s no single government entity bringing this together, but there doesn’t have to be. As Neal Stephenson said, the threat is no longer Big Brother, but instead thousands of Little Brothers.

1984’s Big Brother was run by the state; today’s Big Brother is market driven. Data brokers like ChoicePoint and credit bureaus like Experian aren’t trying to build a police state; they’re just trying to turn a profit. Of course these companies will take advantage of a national ID; they’d be stupid not to. And the correlations, data mining and precise categorizing they can do is why the U.S. government buys commercial data from them.

1984-style police states required lots of people. East Germany employed one informant for every 66 citizens. Today, there’s no reason to have anyone watch anyone else; computers can do the work of people.

1984-style police states were expensive. Today, data storage is constantly getting cheaper. If some data is too expensive to save today, it’ll be affordable in a few years.

And finally, the police state of 1984 was deliberately constructed, while today’s is naturally emergent. There’s no reason to postulate a malicious police force and a government trying to subvert our freedoms. Computerized processes naturally throw off personalized data; companies save it for marketing purposes, and even the most well-intentioned law enforcement agency will make use of it.

Of course, Orwell’s Big Brother had a ruthless efficiency that’s hard to imagine in a government today. But that completely misses the point. A sloppy and inefficient police state is no reason to cheer; watch the movie Brazil and see how scary it can be. You can also see hints of what it might look like in our completely dysfunctional “no-fly” list and useless projects to secretly categorize people according to potential terrorist risk. Police states are inherently inefficient. There’s no reason to assume today’s will be any more effective.

The fear isn’t an Orwellian government deliberately creating the ultimate totalitarian state, although with the U.S.’s programs of phone-record surveillance, illegal wiretapping, massive data mining, a national ID card no one wants and Patriot Act abuses, one can make that case. It’s that we’re doing it ourselves, as a natural byproduct of the information society. We’re building the computer infrastructure that makes it easy for governments, corporations, criminal organizations and even teenage hackers to record everything we do, and—yes—even change our votes. And we will continue to do so unless we pass laws regulating the creation, use, protection, resale and disposal of personal data. It’s precisely the attitude that trivializes the problem that creates it.

This essay appeared in the May issue of Information Security, as the second half of a point/counterpoint with Marcus Ranum. Here’s his half.

Posted on May 11, 2007 at 9:19 AM

Do We Really Need a Security Industry?

Last week I attended the Infosecurity Europe conference in London. Like at the RSA Conference in February, the show floor was chockablock with network, computer and information security companies. As I often do, I mused about what it means for the IT industry that there are thousands of dedicated security products on the market: some good, more lousy, many difficult even to describe. Why aren’t IT products and services naturally secure, and what would it mean for the industry if they were?

I mentioned this in an interview with Silicon.com, and the published article seems to have caused a bit of a stir. Rather than letting people wonder what I really meant, I thought I should explain.

The primary reason the IT security industry exists is because IT products and services aren’t naturally secure. If computers were already secure against viruses, there wouldn’t be any need for antivirus products. If bad network traffic couldn’t be used to attack computers, no one would bother buying a firewall. If there were no more buffer overflows, no one would have to buy products to protect against their effects. If the IT products we purchased were secure out of the box, we wouldn’t have to spend billions every year making them secure.

Aftermarket security is actually a very inefficient way to spend our security dollars; it may compensate for insecure IT products, but doesn’t help improve their security. Additionally, as long as IT security is a separate industry, there will be companies making money based on insecurity—companies who will lose money if the internet becomes more secure.

Fold security into the underlying products, and the companies marketing those products will have an incentive to invest in security upfront, to avoid having to spend more cash obviating the problems later. Their profits would rise in step with the overall level of security on the internet. Initially we’d still be spending a comparable amount of money per year on security—on secure development practices, on embedded security and so on—but some of that money would be going into improving the quality of the IT products we’re buying, and would reduce the amount we spend on security in future years.

I know this is a utopian vision that I probably won’t see in my lifetime, but the IT services market is pushing us in this direction. As IT becomes more of a utility, users are going to buy a whole lot more services than products. And by nature, services are more about results than technologies. Service customers—whether home users or multinational corporations—care less and less about the specifics of security technologies, and increasingly expect their IT to be integrally secure.

Eight years ago, I formed Counterpane Internet Security on the premise that end users (big corporate users, in this case) really don’t want to have to deal with network security. They want to fly airplanes, produce pharmaceuticals or do whatever their core business is. They don’t want to hire the expertise to monitor their network security, and will gladly farm it out to a company that can do it for them. We provided an array of services that took day-to-day security out of the hands of our customers: security monitoring, security-device management, incident response. Security was something our customers purchased, but they purchased results, not details.

Last year BT bought Counterpane, further embedding network security services into the IT infrastructure. BT has customers that don’t want to deal with network management at all; they just want it to work. They want the internet to be like the phone network, or the power grid, or the water system; they want it to be a utility. For these customers, security isn’t even something they purchase: It’s one small part of a larger IT services deal. It’s the same reason IBM bought ISS: to be able to have a more integrated solution to sell to customers.

This is where the IT industry is headed, and when it gets there, there’ll be no point in user conferences like Infosec and RSA. They won’t go away; they’ll simply become industry conferences. If you want to measure progress, look at the demographics of these conferences. A shift toward infrastructure-geared attendees is a measure of success.

Of course, security products won’t disappear—at least, not in my lifetime. There’ll still be firewalls, antivirus software and everything else. There’ll still be startup companies developing clever and innovative security technologies. But the end user won’t care about them. They’ll be embedded within the services sold by large IT outsourcing companies like BT, EDS and IBM, or ISPs like EarthLink and Comcast. Or they’ll be a check-box item somewhere in the core switch.

IT security is getting harder—increasing complexity is largely to blame—and the need for aftermarket security products isn’t disappearing anytime soon. But there’s no earthly reason why users need to know what an intrusion-detection system with stateful protocol analysis is, or why it’s helpful in spotting SQL injection attacks. The whole IT security industry is an accident—an artifact of how the computer industry developed. As IT fades into the background and becomes just another utility, users will simply expect it to work—and the details of how it works won’t matter.
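
For the curious, here is roughly the kind of check hiding behind that jargon, reduced to a deliberately naive sketch. Real stateful protocol analysis reassembles sessions and parses the protocol rather than matching strings; the point stands, since none of this should ever be the end user’s problem:

```python
import re

# A caricature of IDS payload inspection: flag request parameters that
# resemble well-known SQL injection patterns.
SQLI_PATTERNS = [
    re.compile(r"(?i)\bunion\b.+\bselect\b"),  # UNION-based injection
    re.compile(r"(?i)\bor\b\s+1\s*=\s*1"),     # classic tautology
    re.compile(r"--\s*$"),                     # trailing SQL comment
]

def looks_like_sql_injection(query_param: str) -> bool:
    """Return True if the parameter matches a known injection pattern."""
    return any(p.search(query_param) for p in SQLI_PATTERNS)

print(looks_like_sql_injection("id=42"))            # False
print(looks_like_sql_injection("id=42 OR 1=1 --"))  # True
```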

This was my 41st essay for Wired.com.

EDITED TO ADD (5/3): Commentary.

EDITED TO ADD (5/4): More commentary.

EDITED TO ADD (5/10): More commentary.

Posted on May 3, 2007 at 10:09 AM

