Blog: May 2007 Archives

Tactics, Targets, and Objectives

If you encounter an aggressive lion, stare him down. But not a leopard; avoid his gaze at all costs. In both cases, back away slowly; don’t run. If you stumble on a pack of hyenas, run and climb a tree; hyenas can’t climb trees. But don’t do that if you’re being chased by an elephant; he’ll just knock the tree down. Stand still until he forgets about you.

I spent the last few days on safari in a South African game park, and this was just some of the security advice we were all given. What’s interesting about this advice is how well-defined it is. The defenses might not be terribly effective—you still might get eaten, gored or trampled—but they’re your best hope. Doing something else isn’t advised, because animals do the same things over and over again. These are security countermeasures against specific tactics.

Lions and leopards learn tactics that work for them, and I was taught tactics to defend myself. Humans are intelligent, and that means we are more adaptable than animals. But we’re also, generally speaking, lazy and stupid; and, like a lion or hyena, we will repeat tactics that work. Pickpockets use the same tricks over and over again. So do phishers and school shooters. If improvised explosive devices didn’t work often enough, Iraqi insurgents would do something else.

So security against people generally focuses on tactics as well.

A friend of mine recently asked me where she should hide her jewelry in her apartment, so that burglars wouldn’t find it. Burglars tend to look in the same places all the time—dresser tops, night tables, dresser drawers, bathroom counters—so hiding valuables somewhere else is more likely to be effective, especially against a burglar who is pressed for time. Leave decoy cash and jewelry in an obvious place so a burglar will think he’s found your stash and then leave. Again, there’s no guarantee of success, but it’s your best hope.

The key to these countermeasures is to find the pattern: the common attack tactic that is worth defending against. That takes data. A single instance of an attack that didn’t work—liquid bombs, shoe bombs—or one instance that did—9/11—is not a pattern. Implementing defensive tactics against them is the same as my safari guide saying: “We’ve only ever heard of one tourist encountering a lion. He stared it down and survived. Another tourist tried the same thing with a leopard, and he got eaten. So when you see a lion….” The advice I was given was based on thousands of years of collective wisdom from people encountering African animals again and again.

Compare this with the Transportation Security Administration’s approach. With every unique threat, TSA implements a countermeasure with no basis to say that it helps, or that the threat will ever recur.

Furthermore, human attackers can adapt more quickly than lions. A lion won’t learn that he should ignore people who stare him down, and eat them anyway. But people will learn. Burglars now know the common “secret” places people hide their valuables—the toilet, cereal boxes, the refrigerator and freezer, the medicine cabinet, under the bed—and look there. I told my friend to find a different secret place, and to put decoy valuables in a more obvious place.

This is the arms race of security. Common attack tactics result in common countermeasures. Eventually, those countermeasures will be evaded and new attack tactics developed. These, in turn, require new countermeasures. You can easily see this in the constant arms race that is credit card fraud, ATM fraud or automobile theft.

The result of these tactic-specific security countermeasures is to make the attacker go elsewhere. For the most part, the attacker doesn’t particularly care about the target. Lions don’t care who or what they eat; to a lion, you’re just a conveniently packaged bag of protein. Burglars don’t care which house they rob, and terrorists don’t care who they kill. If your countermeasure makes the lion attack an impala instead of you, or if your burglar alarm makes the burglar rob the house next door instead of yours, that’s a win for you.

Tactics matter less if the attacker is after you personally. If, for example, you have a priceless painting hanging in your living room and the burglar knows it, he’s not going to rob the house next door instead—even if you have a burglar alarm. He’s going to figure out how to defeat your system. Or he’ll stop you at gunpoint and force you to open the door. Or he’ll pose as an air-conditioner repairman. What matters is the target, and a good attacker will consider a variety of tactics to reach his target.

This approach requires a different kind of countermeasure, but it’s still well-understood in the security world. For people, it’s what alarm companies, insurance companies and bodyguards specialize in. President Bush needs a different level of protection against targeted attacks than Bill Gates does, and I need a different level of protection than either of them. It would be foolish of me to hire bodyguards in case someone was targeting me for robbery or kidnapping. Yes, I would be more secure, but it’s not a good security trade-off.

Al-Qaida terrorism is different yet again. The goal is to terrorize. It doesn’t care about the target, but it doesn’t have any pattern of tactics, either. Given that, the best way to spend our counterterrorism dollar is on intelligence, investigation and emergency response. And to refuse to be terrorized.

These measures are effective because they don’t assume any particular tactic, and they don’t assume any particular target. We should only apply specific countermeasures when the cost-benefit ratio makes sense (reinforcing airplane cockpit doors) or when a specific tactic is repeatedly observed (lions attacking people who don’t stare them down). Otherwise, general countermeasures are far more effective a defense.

This essay originally appeared on Wired.com.

EDITED TO ADD (6/14): Learning behavior in tigers.

Posted on May 31, 2007 at 6:11 AM

Counterfeiting Is not Terrorism

This is a surreal story of someone who was chained up for hours for trying to spend $2 bills. Clerks at Best Buy thought the bills were counterfeit, and had him arrested.

The most surreal quote of the article is the last sentence:

Commenting on the incident, Baltimore County police spokesman Bill Toohey told the Sun: “It’s a sign that we’re all a little nervous in the post-9/11 world.”

What in the world do the terrorist attacks of 9/11 have to do with counterfeiting? How does being “a little nervous in the post-9/11 world” have anything to do with this incident? Counterfeiting is not terrorism; it isn’t even a little bit like terrorism.

EDITED TO ADD (5/30): The story is from 2005.

Posted on May 30, 2007 at 1:03 PM

RFID in People Access Security Services (PASS) Cards

Last November, the Data Privacy and Integrity Advisory Committee of the Department of Homeland Security recommended against putting RFID chips in identity cards. DHS ignored them, and went ahead with the project anyway. Now, the Smart Card Alliance is criticizing the DHS’s RFID program for cross-border identification, basically saying that it is making the very mistakes the Data Privacy and Integrity Advisory Committee warned about.

Posted on May 30, 2007 at 6:50 AM

Department of Homeland Security Not Focused on Terrorism

I thought terrorism was why we have a DHS, but they’ve been preoccupied with other things:

Of the 814,073 people charged by DHS in immigration courts during the past three years, 12 faced charges of terrorism, TRAC said.

Those 12 cases represent 0.0015 percent of the total number of cases filed.

“The DHS claims it is focused on terrorism. Well that’s just not true,” said David Burnham, a TRAC spokesman. “Either there’s no terrorism, or they’re terrible at catching them. Either way it’s bad for all of us.”

The TRAC analysis also found that DHS filed a minuscule number of what are called “national security” charges against people in the immigration courts. The report stated that 114, or 0.014 percent of the total of roughly 800,000 individuals charged were charged with national security violations.

TRAC reported more than 85 percent of the charges involved more common immigration violations such as not having a valid immigrant visa, overstaying a student visa or entering the United States without an inspection.
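
Those percentages are easy to verify. Here's a quick sketch in Python, using the figures from the quoted report:

```python
# Sanity check of TRAC's percentages, using the report's own figures.
total_charged = 814_073
terrorism_cases = 12
national_security_cases = 114

print(f"{terrorism_cases / total_charged:.4%}")          # 0.0015%
print(f"{national_security_cases / total_charged:.3%}")  # 0.014%
```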

TRAC is a great group, and I recommend wandering around their site if you’re interested in what the U.S. government is actually doing.

Posted on May 29, 2007 at 1:59 PM

Criminals Hijack Large Web Hosting Firm

Nasty attack.

IPOWER declined a phone interview for this story. But the company acknowledged in an e-mail that “over the past three months our servers were targeted. We take this situation very seriously and a diligent cleanup effort has been underway for many months already. We saw the StopBadware report on the day it came out and went to download the list to sweep it as quickly as possible. By looking at the list, it was evident that our cleanup efforts were already helping significantly. By the time we downloaded the list, there were already over a few thousand accounts less than what they claimed in their report.”

IPOWER said the site hacks “came from a compromised server hosted by another company that was listed on the Stopbadware.org Web site. This impacted a higher percentage of accounts on each of these legacy third-party control panel systems.”

The company claims to have more than 700,000 customers. If we assume for the moment the small segment of IPOWER servers Security Fix analyzed is fairly representative of a larger trend, IPOWER may well be home to nearly a quarter-million malicious Web sites.

And an interesting point:

An Internet service provider or Web host can take action within 48 hours if it receives a “takedown notice,” under the Digital Millennium Copyright Act. The law protects network owners from copyright infringement liability, provided they take steps to promptly remove the infringing content. Yet ISPs and Web hosts often leave sites undisturbed for months that cooperate in stealing financial data and consumer identities.

There is no “notice and takedown” law specifically requiring ISPs and Web hosts to police their networks for sites that may serve malicious software.

Posted on May 25, 2007 at 7:13 AM

Airport Screeners Catch Guy in Fake Uniform

This is a joke, right?

A TSA behavior detection team at a Florida airport helped catch a passenger allegedly impersonating a member of the military on May 10 as he went through the security checkpoint.

We spend billions on airport security, and we have so little to show for it that the TSA has to make a big deal about the crime of impersonating a member of the military?

Posted on May 23, 2007 at 12:38 PM

GAO Report on International Passenger Prescreening

From the U.S. GAO: “Aviation Security: Efforts to Strengthen International Passenger Prescreening Are Under Way, but Planning and Implementation Issues Remain,” May 2007.

What GAO Found

Customs and Border Protection (CBP), the Department of Homeland Security (DHS) agency responsible for international passenger prescreening, has planned or is taking several actions designed to strengthen the aviation passenger prescreening process. One such effort involves CBP stationing U.S. personnel overseas to evaluate the authenticity of the travel documents of certain high-risk passengers prior to boarding U.S.-bound flights. Under this pilot program, called the Immigration Advisory Program (IAP), CBP officers personally interview some passengers deemed to be high-risk and evaluate the authenticity and completeness of these passengers’ travel documents. IAP officers also provide technical assistance and training to air carrier staff on the identification of improperly documented passengers destined for the United States. The IAP has been tested at several foreign airports and CBP is negotiating with other countries to expand it elsewhere and to make certain IAP sites permanent. Successful implementation of the IAP rests, in part, on CBP clearly defining the goals and objectives of the program through the development of a strategic plan.

A second aviation passenger prescreening effort designed to strengthen the passenger prescreening process is intended to align international passenger prescreening with a similar program (currently under development) for prescreening passengers on domestic flights. The Transportation Security Administration (TSA)—a separate agency within DHS—is developing a domestic passenger prescreening program called Secure Flight. If CBP’s international prescreening program and TSA’s Secure Flight program are not effectively aligned once Secure Flight becomes operational, this could result in separate implementation requirements for air carriers and increased costs for both air carriers and the government. CBP and TSA officials stated that they are taking steps to coordinate their prescreening efforts, but they have not yet made all key policy decisions.

In addition to these efforts to strengthen certain international aviation passenger prescreening procedures, one other issue requires consideration in the context of these efforts. This issue involves DHS providing the traveling public with assurances of privacy protection as required by federal privacy law. Federal privacy law requires agencies to inform the public about how the government uses their personal information. Although CBP officials have stated that they have taken and are continuing to take steps to comply with these requirements, the current prescreening process allows passenger information to be used in multiple prescreening procedures and transferred among various CBP prescreening systems in ways that are not fully explained in CBP’s privacy disclosures. If CBP does not issue all appropriate disclosures, the traveling public will not be fully aware of how their personal information is being used during the passenger prescreening process.

Posted on May 23, 2007 at 7:18 AM

Image Spam

Good article on image spam:

A year ago, fewer than five out of 100 e-mails were image spam, according to Doug Bowers of Symantec. Today, up to 40 percent are. Meanwhile, image spam is the reason spam traffic overall doubled in 2006, according to antispam company Borderware. It is expected to keep rising.

The conceit behind image spam is graceful in its simplicity: Computers can’t see.

Definitely look at the interactive graphics page.

Posted on May 22, 2007 at 6:46 AM

On the Futility of Fighting Online Pirates

From Forbes:

Their argument is rooted, ironically, in the Digital Millennium Copyright Act that U.S. lawmakers approved in 1998. The Alluc.org kids, as well as the operators of most sites that let users upload content, argue that they’re not violating copyright law if they’re not the ones putting it up and if they take it down at the copyright holder’s request. It’s the same argument Google is making in its YouTube case.

But there are more practical reasons that sites like Alluc.org get away with what they’re doing. One is that there are simply too many of them to keep track of. Media companies’ lawyers rarely have time to police so many obscure sites, and even when they do, users can always upload the infringing files again. So the flow of copyrighted streaming video continues.

Not every scheme to evade intellectual property laws is so subtle. The music-selling site AllofMP3.com uses a simpler business model: Base your company in Russia, steal music from American labels and sell it cheaply. AllofMP3 allows users to download full albums for as little as $1 each—10% of what they would cost on iTunes. From June to October 2006 alone, the Recording Industry Association of America says that 11 million songs were downloaded from the site. AllofMP3 claims those sales adhered strictly to Russian law, but that doesn’t satisfy the RIAA; the record labels have launched a lawsuit, asking for $150,000 for each stolen file, totaling $1.65 trillion.

Slashdot thread.

Posted on May 21, 2007 at 1:36 PM

307-Digit Number Factored

We have a new factoring record: 307 digits. It’s a special number—2^1039 – 1—but the techniques can be generalized:

Is the writing on the wall for 1024-bit encryption? “The answer to that question is an unqualified yes,” says Lenstra. For the moment the standard is still secure, because it is much more difficult to factor a number made up of two huge prime numbers, such as an RSA number, than it is to factor a number like this one that has a special mathematical form. But the clock is definitely ticking. “Last time, it took nine years for us to generalize from a special to a non-special, hard-to-factor number (155 digits). I won’t make predictions, but let’s just say it might be a good idea to stay tuned.”

I would have hoped RSA applications had moved away from 1024-bit security years ago, but for those that haven’t yet: wake up.

EDITED TO ADD (5/21): That’s 1017 bits. (I should have said that.)
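
For anyone checking the digits-against-bits arithmetic, here is a short sketch. It assumes the small factor 5080711 reported with the published factorization; dividing it out of 2^1039 – 1 leaves the 307-digit, 1017-bit cofactor that was actually factored:

```python
# Digits-versus-bits arithmetic for the factoring record.
m = 2**1039 - 1
print(m.bit_length(), len(str(m)))  # 1039 bits, 313 decimal digits

# Dividing out the small known factor (an assumption taken from the
# published factorization) leaves the number that was actually factored.
cofactor = m // 5080711
print(cofactor.bit_length(), len(str(cofactor)))  # 1017 bits, 307 digits
```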

Posted on May 21, 2007 at 10:26 AM

London's Dirty Bomb Tests

London is running a dirty-bomb drill. Mostly a movie-plot threat, but these sorts of drills are useful, regardless of the scenario.

I agree with this:

As ever, plain old explosives are the big worry. As for chemicals, compare the effects of the Tokyo subway gas attack (10 terrorists, five attacks each involving 1kg of hard-to-get sarin nerve gas, 12 dead total) with a typical backpack-bomb attack (London 7/7: four terrorists, four simple home made devices, 52 dead). Only a stupid attacker would bother with chemicals. Real pros like the IRA, for instance, never have.

Although with a dirty bomb, the media-inspired panic would certainly be a huge factor.

Posted on May 21, 2007 at 6:34 AM

Joke That'll Get You Arrested

Don’t say that I didn’t warn you:

If you are sitting next to someone who irritates you on a plane or train…

1. Quietly and calmly open up your laptop case.
2. Remove your laptop.
3. Boot it.
4. Make sure the person who won’t leave you alone can see the screen.
5. Open your email client to this message.
6. Close your eyes and tilt your head up to the sky.
7. Then hit this link: http://www.thecleverest.com/countdown.swf

If you try it, post what happened in comments.

Posted on May 19, 2007 at 10:16 AM

Rare Risk and Overreactions

Everyone had a reaction to the horrific events of the Virginia Tech shootings. Some of those reactions were rational. Others were not.

A high school student was suspended for customizing a first-person shooter game with a map of his school. A contractor was fired from his government job for talking about a gun, and then visited by the police when he created a comic about the incident. A dean at Yale banned realistic stage weapons from the university theaters—a policy that was reversed within a day. And some teachers terrorized a sixth-grade class by staging a fake gunman attack, without telling them that it was a drill.

These things all happened, even though shootings like this are incredibly rare; even though—for all the press—less than one percent (.pdf) of homicides and suicides of children ages 5 to 19 occur in schools. In fact, these overreactions occurred, not despite these facts, but because of them.

The Virginia Tech massacre is precisely the sort of event we humans tend to overreact to. Our brains aren’t very good at probability and risk analysis, especially when it comes to rare occurrences. We tend to exaggerate spectacular, strange and rare events, and downplay ordinary, familiar and common ones. There’s a lot of research in the psychological community about how the brain responds to risk—some of it I have already written about—but the gist is this: Our brains are much better at processing the simple risks we’ve had to deal with throughout most of our species’ existence, and much poorer at evaluating the complex risks society forces us to face today.

Novelty plus dread equals overreaction.

We can see the effects of this all the time. We fear being murdered, kidnapped, raped and assaulted by strangers, when it’s far more likely that the perpetrator of such offenses is a relative or a friend. We worry about airplane crashes and rampaging shooters instead of automobile crashes and domestic violence—both far more common.

In the United States, dogs, snakes, bees and pigs each kill more people per year (.pdf) than sharks. In fact, dogs kill more humans than any animal except for other humans. Sharks are more dangerous than dogs, yes, but we’re far more likely to encounter dogs than sharks.

Our greatest recent overreaction to a rare event was our response to the terrorist attacks of 9/11. I remember then-Attorney General John Ashcroft giving a speech in Minnesota—where I live—in 2003, and claiming that the fact there were no new terrorist attacks since 9/11 was proof that his policies were working. I thought: “There were no terrorist attacks in the two years preceding 9/11, and you didn’t have any policies. What does that prove?”

What it proves is that terrorist attacks are very rare, and maybe our reaction wasn’t worth the enormous expense, loss of liberty, attacks on our Constitution and damage to our credibility on the world stage. Still, overreacting was the natural thing for us to do. Yes, it’s security theater, but it makes us feel safer.

People tend to base risk analysis more on personal story than on data, despite the old joke that “the plural of anecdote is not data.” If a friend gets mugged in a foreign country, that story is more likely to affect how safe you feel traveling to that country than abstract crime statistics.

We give storytellers we have a relationship with more credibility than strangers, and stories that are close to us more weight than stories from foreign lands. In other words, proximity of relationship affects our risk assessment. And who is everyone’s major storyteller these days? Television. (Nassim Nicholas Taleb’s great book, The Black Swan: The Impact of the Highly Improbable, discusses this.)

Consider the reaction to another event from last month: professional baseball player Josh Hancock got drunk and died in a car crash. As a result, several baseball teams are banning alcohol in their clubhouses after games. Aside from this being a ridiculous reaction to an incredibly rare event (2,430 baseball games per season, 35 people per clubhouse, two clubhouses per game. And how often has this happened?), it makes no sense as a solution. Hancock didn’t get drunk in the clubhouse; he got drunk at a bar. But Major League Baseball needs to be seen as doing something, even if that something doesn’t make sense—even if that something actually increases risk by forcing players to drink at bars instead of at the clubhouse, where there’s more control over the practice.
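
The back-of-the-envelope arithmetic, sketched below with the figures above, makes the rarity concrete:

```python
# Back-of-the-envelope exposure count from the figures in the paragraph above.
games_per_season = 2430
clubhouses_per_game = 2
people_per_clubhouse = 35

exposures_per_season = (games_per_season * clubhouses_per_game
                        * people_per_clubhouse)
print(exposures_per_season)  # 170100
```

One incident against roughly 170,000 clubhouse exposures per season; and, again, the drinking in question didn’t even happen in a clubhouse.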

I tell people that if it’s in the news, don’t worry about it. The very definition of “news” is “something that hardly ever happens.” It’s when something isn’t in the news, when it’s so common that it’s no longer news—car crashes, domestic violence—that you should start worrying.

But that’s not the way we think. Psychologist Scott Plous said it well in The Psychology of Judgment and Decision Making: “In very general terms: (1) The more available an event is, the more frequent or probable it will seem; (2) the more vivid a piece of information is, the more easily recalled and convincing it will be; and (3) the more salient something is, the more likely it will be to appear causal.”

So, when faced with a very available and highly vivid event like 9/11 or the Virginia Tech shootings, we overreact. And when faced with all the salient related events, we assume causality. We pass the Patriot Act. We think if we give guns out to students, or maybe make it harder for students to get guns, we’ll have solved the problem. We don’t let our children go to playgrounds unsupervised. We stay out of the ocean because we read about a shark attack somewhere.

It’s our brains again. We need to “do something,” even if that something doesn’t make sense; even if it is ineffective. And we need to do something directly related to the details of the actual event. So instead of implementing effective, but more general, security measures to reduce the risk of terrorism, we ban box cutters on airplanes. And we look back on the Virginia Tech massacre with 20-20 hindsight and recriminate ourselves about the things we should have done.

Lastly, our brains need to find someone or something to blame. (Jon Stewart has an excellent bit on the Virginia Tech scapegoat search, and media coverage in general.) But sometimes there is no scapegoat to be found; sometimes we did everything right, but just got unlucky. We simply can’t prevent a lone nutcase from shooting people at random; there’s no security measure that would work.

As circular as it sounds, rare events are rare primarily because they don’t occur very often, and not because of any preventive security measures. And implementing security measures to make these rare events even rarer is like the joke about the guy who stomps around his house to keep the elephants away.

“Elephants? There are no elephants in this neighborhood,” says a neighbor.

“See how well it works!”

If you want to do something that makes security sense, figure out what’s common among a bunch of rare events, and concentrate your countermeasures there. Focus on the general risk of terrorism, and not the specific threat of airplane bombings using liquid explosives. Focus on the general risk of troubled young adults, and not the specific threat of a lone gunman wandering around a college campus. Ignore the movie-plot threats, and concentrate on the real risks.

This essay originally appeared on Wired.com, my 42nd essay on that site.

EDITED TO ADD (6/5): Archiloque has translated this essay into French.

EDITED TO ADD (6/14): The British academic risk researcher Prof. John Adams wrote an insightful essay on this topic called “What Kills You Matters—Not Numbers.”

Posted on May 17, 2007 at 2:16 PM

Dan Geer on Trade-Offs and Monoculture

In the April 2007 issue of Queue, Dan Geer writes about security trade-offs, monoculture, and genetic diversity in honeybees:

Security people are never in charge unless an acute embarrassment has occurred. Otherwise, their advice is tempered by “economic reality,” which is to say that security is means, not an end. This is as it should be. Since means are about tradeoffs, security is about tradeoffs, but you already knew that.

Posted on May 17, 2007 at 6:58 AM

Mobile Phones Disabled When President Bush Visits Sydney

In an effort to prevent terrorism, parts of the mobile phone network will be disabled when President Bush visits Australia. I’ve written about this kind of thing before; it’s a perfect example of security theater: a countermeasure that works if you happen to guess the specific details of the plot correctly, and is completely useless otherwise.

On the plus side, it’s only a small area that’s blocked:

It is expected mobile phone calls will drop out in an area the size of a football field as the helicopter passes overhead.

EDITED TO ADD (5/19): Slashdot thread.

EDITED TO ADD (5/20): The Register article.

Posted on May 16, 2007 at 1:55 PM

Teaching Computers How to Forget

I’ve written about the death of ephemeral conversation, the rise of wholesale surveillance, and the electronic audit trail that now follows us through life. Viktor Mayer-Schönberger, a professor in Harvard’s JFK School of Government, has noticed this too, and believes that computers need to forget.

Why would we want our machines to “forget”? Mayer-Schönberger suggests that we are creating a Benthamist panopticon by archiving so many bits of knowledge for so long. The accumulated weight of stored Google searches, thousands of family photographs, millions of books, credit bureau information, air travel reservations, massive government databases, archived e-mail, etc., can actually be a detriment to speech and action, he argues.

“If whatever we do can be held against us years later, if all our impulsive comments are preserved, they can easily be combined into a composite picture of ourselves,” he writes in the paper. “Afraid how our words and actions may be perceived years later and taken out of context, the lack of forgetting may prompt us to speak less freely and openly.”

In other words, it threatens to make us all politicians.

In contrast to omnibus data protection legislation, Mayer-Schönberger proposes a combination of law and software to ensure that most data is “forgotten” by default. A law would decree that “those who create software that collects and stores data build into their code not only the ability to forget with time, but make such forgetting the default.” Essentially, this means that all collected data is tagged with a new piece of metadata that defines when the information should expire.

In practice, this would mean that iTunes could only store buying data for a limited time, a time defined by law. Should customers explicitly want this time extended, that would be fine, but people must be given a choice. Even data created by users—digital pictures, for example—would be tagged by the cameras that create them to expire in a year or two; pictures that people want to keep could simply be given a date 10,000 years in the future.
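
Here’s a minimal sketch of what such expiry tagging might look like, assuming a legally mandated default retention period. All names and retention periods here are hypothetical:

```python
# A sketch of "forgetting by default": every record carries an expiration
# date, and keeping data longer requires an explicit choice. All names and
# retention periods here are hypothetical.
from dataclasses import dataclass
from datetime import datetime, timedelta

DEFAULT_RETENTION = timedelta(days=365)  # the legally mandated default

@dataclass
class Record:
    data: bytes
    expires_at: datetime

class Store:
    def __init__(self) -> None:
        self.records: list[Record] = []

    def add(self, data: bytes, keep_for: timedelta = DEFAULT_RETENTION) -> None:
        # Longer retention is allowed, but only as a deliberate choice.
        self.records.append(Record(data, datetime.utcnow() + keep_for))

    def sweep(self) -> None:
        # Forgetting by default: expired records are simply dropped.
        now = datetime.utcnow()
        self.records = [r for r in self.records if r.expires_at > now]

store = Store()
store.add(b"purchase history")                              # expires in a year
store.add(b"family photo", keep_for=timedelta(days=36500))  # kept ~100 years
```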

Frank Pasquale also comments on the legal implications implicit in this issue. And Paul Ohm wrote a note titled “The Fourth Amendment Right to Delete”:

For years the police have entered homes and offices, hauled away filing cabinets full of records, and searched them back at the police station for evidence. In Fourth Amendment terms, these actions are entry, seizure, and search, respectively, and usually require the police to obtain a warrant. Modern-day police can avoid some of these messy steps with the help of technology: They have tools that duplicate stored records and collect evidence of behavior, all from a distance and without the need for physical entry. These tools generate huge amounts of data that may be searched immediately or stored indefinitely for later analysis. Meanwhile, it is unclear whether the Fourth Amendment’s restrictions apply to these technologies: Are the acts of duplication and collection themselves seizure? Before the data are analyzed, has a search occurred?

EDITED TO ADD (6/14): Interesting presentation earlier this year by Dr. Radia Perlman that represents some work toward this problem. And a counterpoint.

Posted on May 16, 2007 at 6:19 AM

Hinky at the Casino: JDLR

It’s called “Just Doesn’t Look Right”:

In the casino business, or any other, we tend to become complacent, and we stop paying attention to the little things. But a really sharp observer will still be shocked awake at some little unexplained thing: the five o’clock shadow on the woman sitting opposite the big-money player, or too many people watching that game, or the fellow who keeps looking directly at the cameras. The guy who looks as though he slept under an overpass carrying a new shopping bag from Neiman Marcus, the two players on a table game whose arms were held against their chests, the bulge under that character’s jacket and the man wearing an overcoat on an August day in Las Vegas.

Posted on May 15, 2007 at 11:05 AM

Is Penetration Testing Worth it?

There are security experts who insist penetration testing is essential for network security, and you have no hope of being secure unless you do it regularly. And there are contrarian security experts who tell you penetration testing is a waste of time; you might as well throw your money away. Both of these views are wrong. The reality of penetration testing is more complicated and nuanced.

Penetration testing is a broad term. It might mean breaking into a network to demonstrate you can. It might mean trying to break into a network to document vulnerabilities. It might involve a remote attack, physical penetration of a data center or social engineering attacks. It might use commercial or proprietary vulnerability scanning tools, or rely on skilled white-hat hackers. It might just evaluate software version numbers and patch levels, and make inferences about vulnerabilities.
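
As a toy illustration of that last kind of test, inferring vulnerabilities from version numbers alone, here is a sketch; the advisory thresholds are invented:

```python
# A toy version-number check: flag packages whose installed version falls
# below a known-vulnerable threshold. The advisory thresholds are invented.
KNOWN_BAD = {
    "openssh": (4, 4),   # hypothetical: versions below 4.4 presumed vulnerable
    "apache":  (2, 2, 4),
}

def parse(version: str) -> tuple:
    return tuple(int(part) for part in version.split("."))

inventory = {"openssh": "4.2", "apache": "2.2.6"}

for package, version in inventory.items():
    threshold = KNOWN_BAD.get(package)
    if threshold and parse(version) < threshold:
        print(f"{package} {version}: presumed vulnerable; goes in the report")
```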

It’s going to be expensive, and you’ll get a thick report when the testing is done.

And that’s the real problem. You really don’t want a thick report documenting all the ways your network is insecure. You don’t have the budget to fix them all, so the document will sit around waiting to make someone look bad. Or, even worse, it’ll be discovered in a breach lawsuit. Do you really want an opposing attorney to ask you to explain why you paid to document the security holes in your network, and then didn’t fix them? Probably the safest thing you can do with the report, after you read it, is shred it.

Given enough time and money, a pen test will find vulnerabilities; there’s no point in proving it. And if you’re not going to fix all the uncovered vulnerabilities, there’s no point uncovering them. But there is a way to do penetration testing usefully. For years I’ve been saying security consists of protection, detection and response—and you need all three to have good security. Before you can do a good job with any of these, you have to assess your security. And done right, penetration testing is a key component of a security assessment.

I like to restrict penetration testing to the most commonly exploited critical vulnerabilities, like those found on the SANS Top 20 list. If you have any of those vulnerabilities, you really need to fix them.

If you think about it, penetration testing is an odd business. Is there an analogue to it anywhere else in security? Sure, militaries run these exercises all the time, but how about in business? Do we hire burglars to try to break into our warehouses? Do we attempt to commit fraud against ourselves? No, we don’t.

Penetration testing has become big business because systems are so complicated and poorly understood. We know about burglars and kidnapping and fraud, but we don’t know about computer criminals. We don’t know what’s dangerous today, and what will be dangerous tomorrow. So we hire penetration testers in the belief they can explain it.

There are two reasons why you might want to conduct a penetration test. One, you want to know whether a certain vulnerability is present because you’re going to fix it if it is. And two, you need a big, scary report to persuade your boss to spend more money. If neither is true, I’m going to save you a lot of money by giving you this free penetration test: You’re vulnerable.

Now, go do something useful about it.

This essay appeared in the March issue of Information Security, as the first half of a point/counterpoint with Marcus Ranum. Here’s his half.

Posted on May 15, 2007 at 7:05 AM

Does Secrecy Help Protect Personal Information?

Personal information protection is an economic problem, not a security problem. And the problem can be easily explained: The organizations we trust to protect our personal information do not suffer when information gets exposed. On the other hand, individuals who suffer when personal information is exposed don’t have the capability to protect that information.

There are actually two problems here: Personal information is easy to steal, and it’s valuable once stolen. We can’t solve one problem without solving the other. The solutions aren’t easy, and you’re not going to like them.

First, fix the economic problem. Credit card companies make more money extending easy credit and making it trivial for customers to use their cards than they lose from fraud. They won’t improve their security as long as you (and not they) are the one who suffers from identity theft. It’s the same for banks and brokerages: As long as you’re the one who suffers when your account is hacked, they don’t have any incentive to fix the problem. And data brokers like ChoicePoint are worse; they don’t suffer if they reveal your information. You don’t have a business relationship with them; you can’t even switch to a competitor in disgust.

Credit card security works as well as it does because the 1968 Truth in Lending Law limits consumer liability for fraud to $50. If the credit card companies could pass fraud losses on to the consumers, they would be spending far less money to stop those losses. But once Congress forced them to suffer the costs of fraud, they invented all sorts of security measures—real-time transaction verification, expert systems patrolling the transaction database and so on—to prevent fraud. The lesson is clear: Make the party in the best position to mitigate the risk responsible for the risk. What this will do is enable the capitalist innovation engine. Once it’s in the financial interest of financial institutions to protect us from identity theft, they will.

Second, stop using personal information to authenticate people. Watch how credit cards work. Notice that the store clerk barely looks at your signature, or how you can use credit cards remotely where no one can check your signature. The credit card industry learned decades ago that authenticating people has only limited value. Instead, they put most of their effort into authenticating the transaction, and they’re much more secure because of it.

This won’t solve the problem of securing our personal information, but it will greatly reduce the threat. Once the information is no longer of value, you only have to worry about securing the information from voyeurs rather than the more common—and more financially motivated—fraudsters.

And third, fix the other economic problem: Organizations that expose our personal information aren’t hurt by that exposure. We need a comprehensive privacy law that gives individuals ownership of their personal information and allows them to take action against organizations that don’t care for it properly.

“Passwords” like credit card numbers and mother’s maiden name used to work, but we’ve forever left the world where our privacy comes from the obscurity of our personal information and the difficulty others have in accessing it. We need to abandon security systems that are based on obscurity and difficulty, and build legal protections to take over where technological advances have left us exposed.

This essay appeared in the January issue of Information Security, as the second half of a point/counterpoint with Marcus Ranum. Here’s his half.

Posted on May 14, 2007 at 12:24 PM

Sex Toy Security Risk

This sounds like bullshit to me:

Small, egg-shaped and promising ‘divine’ vibrations, a UK sex toy has been deemed a threat to Cyprus’s national security. According to the company Ann Summers, the Love Bug 2 has been banned because the Cypriot military is concerned its electronic waves would disrupt the army’s radio frequencies. Operated by a remote control with a range of six metres, it is described by Ann Summers as ‘deceptively powerful’. The company said: “The Love Bug 2 is available in Cyprus but we have had to put a warning out urging Cypriots not to use it.”

Posted on May 11, 2007 at 12:19 PM

Is Big Brother a Big Deal?

Big Brother isn’t what he used to be. George Orwell extrapolated his totalitarian state from the 1940s. Today’s information society looks nothing like Orwell’s world, and watching and intimidating a population today isn’t anything like what Winston Smith experienced.

Data collection in 1984 was deliberate; today’s is inadvertent. In the information society, we generate data naturally. In Orwell’s world, people were naturally anonymous; today, we leave digital footprints everywhere.

1984’s police state was centralized; today’s is decentralized. Your phone company knows who you talk to, your credit card company knows where you shop and Netflix knows what you watch. Your ISP can read your email, your cell phone can track your movements and your supermarket can monitor your purchasing patterns. There’s no single government entity bringing this together, but there doesn’t have to be. As Neal Stephenson said, the threat is no longer Big Brother, but instead thousands of Little Brothers.

1984’s Big Brother was run by the state; today’s Big Brother is market driven. Data brokers like ChoicePoint and credit bureaus like Experian aren’t trying to build a police state; they’re just trying to turn a profit. Of course these companies will take advantage of a national ID; they’d be stupid not to. And the correlations, data mining and precise categorizing they can do is why the U.S. government buys commercial data from them.

1984-style police states required lots of people. East Germany employed one informant for every 66 citizens. Today, there’s no reason to have anyone watch anyone else; computers can do the work of people.

1984-style police states were expensive. Today, data storage is constantly getting cheaper. If some data is too expensive to save today, it’ll be affordable in a few years.

And finally, the police state of 1984 was deliberately constructed, while today’s is naturally emergent. There’s no reason to postulate a malicious police force and a government trying to subvert our freedoms. Computerized processes naturally throw off personalized data; companies save it for marketing purposes, and even the most well-intentioned law enforcement agency will make use of it.

Of course, Orwell’s Big Brother had a ruthless efficiency that’s hard to imagine in a government today. But that completely misses the point. A sloppy and inefficient police state is no reason to cheer; watch the movie Brazil and see how scary it can be. You can also see hints of what it might look like in our completely dysfunctional “no-fly” list and useless projects to secretly categorize people according to potential terrorist risk. Police states are inherently inefficient. There’s no reason to assume today’s will be any more effective.

The fear isn’t an Orwellian government deliberately creating the ultimate totalitarian state, although with the U.S.’s programs of phone-record surveillance, illegal wiretapping, massive data mining, a national ID card no one wants and Patriot Act abuses, one can make that case. It’s that we’re doing it ourselves, as a natural byproduct of the information society. We’re building the computer infrastructure that makes it easy for governments, corporations, criminal organizations and even teenage hackers to record everything we do, and—yes—even change our votes. And we will continue to do so unless we pass laws regulating the creation, use, protection, resale and disposal of personal data. It’s precisely the attitude that trivializes the problem that creates it.

This essay appeared in the May issue of Information Security, as the second half of a point/counterpoint with Marcus Ranum. Here’s his half.

Posted on May 11, 2007 at 9:19 AM

Quantum Computation Research Center in Singapore

Singapore is setting up a $98M research center for quantum computation.

Great news, but what in the world does this quote mean?

Professor Artur Ekert, Director, Research Centre of Excellence, said: “At the moment, you can buy quantum cryptography systems, you can use it in some simple applications but somehow you have to trust companies that sell it to you or you have to test the equipment.

“The kind of quantum cryptography we develop here is probably the most sophisticated that is not available in any other countries so we have some ideas to make it so secure that you don’t even have to trust equipment that you could buy from a vendor.”

Posted on May 10, 2007 at 1:08 PM

1933 Anti-Spam Doorbell

Here’s a great description of an anti-spam doorbell from 1933. A visitor had to deposit a dime into a slot to make the doorbell ring. If the homeowner appreciated the visit, he would return the dime. Otherwise, the dime became the cost of disturbing the homeowner.

This kind of system has been proposed for e-mail as well: the sender has to pay the receiver—or someone else in the system—a nominal amount for each e-mail sent. This money is returned if the e-mail is wanted, and forfeited if it is spam. The result would be to raise the cost of sending spam to the point where it is uneconomical.
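
Here is a minimal sketch of the proposed mechanics, with hypothetical names and amounts (balances in integer cents, to keep the accounting exact):

```python
# A sketch of refundable "e-postage": the sender escrows a small bond per
# message, and the recipient's verdict refunds or forfeits it. Names and
# amounts are hypothetical; balances are in integer cents.
POSTAGE = 10  # cents, the electronic equivalent of the doorbell's dime

balances = {"alice": 500, "spammer": 500, "bob": 0}

def send(sender: str, recipient: str, wanted: bool) -> None:
    balances[sender] -= POSTAGE          # escrow the postage on sending
    if wanted:
        balances[sender] += POSTAGE      # wanted mail: postage refunded
    else:
        balances[recipient] += POSTAGE   # spam: the recipient keeps the dime

send("alice", "bob", wanted=True)     # costs alice nothing
send("spammer", "bob", wanted=False)  # costs the spammer a dime per message
print(balances)  # {'alice': 500, 'spammer': 490, 'bob': 10}
```

Note that the scheme charges whoever the system believes the sender to be, which is exactly the weakness described below: a hijacked computer makes its innocent owner the “sender.”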

I think it’s worth comparing the two systems—the doorbell system and the e-mail system—to demonstrate why it won’t work for spam.

The doorbell system fails for three reasons: the percentage of annoying visitors is small enough to make the system largely unnecessary, visitors don’t generally have dimes on them (presumably fixable if the system becomes ubiquitous), and it’s too easy to successfully bypass the system by knocking (not true for an apartment building).

The anti-spam system doesn’t suffer from the first two problems: spam is an enormous percentage of total e-mail, and an automated accounting system makes the financial mechanics easy. But the anti-spam system is too easy to bypass, and it’s too easy to hack. And once you set up a financial system, you’re simply inviting hacks.

The anti-spam system fails because spammers don’t have to send e-mail directly—they can take over innocent computers and send it from them. So it’s the people whose computers have been hacked into, victims in their own right, who will end up paying for spam. This risk can be limited by letting people put an upper limit on the money in their accounts, but it is still serious.

And criminals can exploit the system in the other direction, too. They could hack into innocent computers and have them send “spam” to their email addresses, collecting money in the process.

Trying to impose some sort of economic penalty on unwanted e-mail is a good idea, but it won’t work unless the endpoints are trusted. And we’re nowhere near that trust today.

Posted on May 10, 2007 at 5:57 AM

Poppy Coins Are not Radio Transmitters

Remember the weird story about radio transmitters found in Canadian coins in order to spy on Americans?

Complete nonsense:

The worried contractors described the coins as “anomalous” and “filled with something man-made that looked like nanotechnology,” according to once-classified U.S. government reports and e-mails obtained by the AP.

The silver-colored 25-cent piece features the red image of a poppy—Canada’s flower of remembrance—inlaid over a maple leaf. The unorthodox quarter is identical to the coins pictured and described as suspicious in the contractors’ accounts.

The supposed nanotechnology actually was a conventional protective coating the Royal Canadian Mint applied to prevent the poppy’s red color from rubbing off. The mint produced nearly 30 million such quarters in 2004 commemorating Canada’s 117,000 war dead.

“It did not appear to be electronic [analog] in nature or have a power source,” wrote one U.S. contractor, who discovered the coin in the cup holder of a rental car. “Under high power microscope, it appeared to be complex consisting of several layers of clear, but different material, with a wire-like mesh suspended on top.”

The confidential accounts led to a sensational warning from the Defense Security Service, an agency of the Defense Department, that mysterious coins with radio frequency transmitters were found planted on U.S. contractors with classified security clearances on at least three separate occasions between October 2005 and January 2006 as the contractors traveled through Canada.

One contractor believed someone had placed two of the quarters in an outer coat pocket after the contractor had emptied the pocket hours earlier. “Coat pockets were empty that morning and I was keeping all of my coins in a plastic bag in my inner coat pocket,” the contractor wrote.

Posted on May 9, 2007 at 11:28 AM

Low-Tech Air Force Grounds High-Tech Air Force

Good story:

Sri Lanka’s powerful air force has been grounded by single-engined, propeller-driven aircraft adapted by Tamil Tiger guerillas to carry bombs under their wings.

The “Flying Tigers”—the tiny air wing of the brutal LTTE insurgents fighting for a separate Tamil state—are proving more than a match for Sri Lanka’s well-equipped air force.

After a second night raid on the capital, Colombo, it is clear to South Asian military analysts that the world’s only guerilla movement with an air-strike capacity has been able to attack virtually unchallenged by the conventional air force.

Flying hundreds of kilometres from secret jungle airstrips, the Flying Tigers, in what are believed to be adapted Zlin Z-142 aircraft of Czech design, have been untroubled other than by ground fire as they have successively raided the country’s biggest military base, next to the international airport, and oil and gas installations on the fringes of the city.

After each attack, they have returned to their bases, outwitting the Sri Lankan air force, which has a fleet of more than 100 aircraft.

Even sophisticated radar and air defence systems have done little more than warn of impending attacks and allow time for anti-aircraft batteries to open fire into the night sky, aiming at targets they cannot see.

The air force’s Israeli Kfirs, Russian Mig-27s and Y-8 bombers have remained grounded, along with its force of MI-17 and MI-24 helicopter gunships.

Posted on May 9, 2007 at 6:09 AM

REAL ID Action Required Now

I’ve written about the U.S. national ID card—REAL ID—extensively (most recently here). The Department of Homeland Security has published draft rules regarding REAL ID and is requesting comments. Comments are due today, by 5:00 PM Eastern Time. Please, please, please, go to this Privacy Coalition site and submit your comments. The DHS has been making a big deal about the fact that so few people are commenting, and we need to prove them wrong.

This morning the Senate Judiciary Committee held hearings on REAL ID (info—and eventually a video—here); I was one of the witnesses who testified.

And lastly, Richard Forno and I wrote this essay for News.com:

In March, the Department of Homeland Security released its long-awaited guidance document regarding national implementation of the Real ID program, as part of its post-9/11 national security initiatives. It is perhaps quite telling that despite bipartisan opposition, Real ID was buried in a 2005 “must-pass” military spending bill and enacted into law without public debate or congressional hearings.

DHS has maintained that the Real ID concept is not a national identification database. While it’s true that the system is not a single database per se, this is a semantic dodge; according to the DHS document, Real ID will be a collaborative data-interchange environment built from a series of interlinking systems operated and administered by the states. In other words, to the Department of Homeland Security, it’s not a single database because it’s not a single system. But the functionality of a single database remains intact under the guise of a federated data-interchange environment.
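
To see why this is a distinction without a difference, consider a sketch: if every state system answers the same interlinked query interface, a federated lookup is functionally indistinguishable from a query against one national database. The records below are invented stand-ins:

```python
# A sketch of why "not a single database" is a semantic dodge: if every
# state system answers the same interlinked query interface, one federated
# lookup behaves exactly like a query against a national database. The
# records below are invented stand-ins.
state_systems = {
    "MD": {"lic-1001": {"name": "Alice", "dob": "1970-01-01"}},
    "VA": {"lic-2002": {"name": "Bob", "dob": "1980-02-02"}},
}

def federated_lookup(license_id: str):
    # The caller cannot tell this apart from querying one central table.
    for state, records in state_systems.items():
        if license_id in records:
            return state, records[license_id]
    return None

print(federated_lookup("lic-2002"))  # ('VA', {'name': 'Bob', 'dob': '1980-02-02'})
```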

The DHS document notes the “primary benefit of Real ID is to improve the security and lessen the vulnerability of federal buildings, nuclear facilities, and aircraft to terrorist attack.” We know now that vulnerable cockpit doors were the primary security weakness contributing to 9/11, and reinforcing them was a long-overdue protective measure to prevent hijackings. But this still raises an interesting question: Are there really so many members of the American public just “dropping by” to visit a nuclear facility that it’s become a primary reason for creating a national identification system? Are such visitors actually admitted?

DHS proposes guidelines for proving one’s identity and residence when applying for a Real ID card. Yet while the department concedes it’s a monumental task to prove one’s domicile or residence, it leaves it up to the states to determine what documents would be adequate proof of residence—and even suggests that a utility bill or bank statement might be appropriate documentation. If so, a person could easily generate multiple proof-of-residence documents. Basing Real ID on such easy-to-forge documents obviates a large portion of what Real ID is supposed to accomplish.

Finally, and perhaps most importantly for Americans, the very last paragraph of the 160-page Real ID document deserves special attention. In a nod to states’ rights advocates, DHS declares that states are free not to participate in the Real ID system if they choose—but any identification card issued by a state that does not meet Real ID criteria is to be clearly labeled as such, to include “bold lettering” or a “unique design” similar to how many states design driver’s licenses for those under 21 years of age.

In its own guidance document, the department has proposed branding citizens not possessing a Real ID card in a manner that lets all who see their official state-issued identification know that they’re “different,” and perhaps potentially dangerous, according to standards established by the federal government. They would become stigmatized, branded, marked, ostracized, segregated. All in the name of protecting the homeland; no wonder this provision appears at the very end of the document.

One likely outcome of this DHS-proposed social segregation is that people presenting non-Real ID identification automatically will be presumed suspicious and perhaps subject to additional screening or surveillance to confirm their innocence at a bar, office building, airport or routine traffic stop. Such a situation would establish a new form of social segregation—an attempt to separate “us” from “them” in the age of counterterrorism and the new normal, where one is presumed suspicious until proven more suspicious.

Two other big-picture concerns about Real ID come to mind: Looking at the overall concept of a national identification database, and given existing data security controls in large distributed systems, one wonders how vulnerable this system-of-systems will be to data loss or identity theft resulting from unscrupulous employees, flawed technologies, external compromises or human error—even under the best of security conditions. And second, there is no clear guidance on the limits of how the Real ID database would be used. Other homeland security initiatives, such as the Patriot Act, have been used and applied—some say abused—for purposes far removed from anything related to homeland security. How can we ensure the same will not happen with Real ID?

As currently proposed, Real ID will fail for several reasons. From a technical and implementation perspective, there are serious questions about its operational abilities both to protect citizen information and resist attempts at circumvention by adversaries. Financially, the initial unfunded $11 billion cost, forced onto the states by the federal government, is excessive. And from a sociological perspective, Real ID will increase the potential for expanded personal surveillance and lay the foundation for a new form of class segregation in the name of protecting the homeland.

It’s time to rethink some of the security decisions made during the emotional aftermath of 9/11 and determine whether they’re still a good idea for homeland security and America. After all, if Real ID was such a well-conceived plan, Maine and 22 other states wouldn’t be challenging it in their legislatures or rejecting the Real ID concept for any number of reasons. But they are.

And we as citizens should, too. Let the debate begin.

Again, go to this Privacy Coalition site and express your views. Today. Before 5:00 PM Eastern Time. (Or, if you prefer, you can use EFF’s comments page.)

Really. It will make a difference.

EDITED TO ADD (5/8): Status of anti-REAL-ID legislation in the states.

EDITED TO ADD (5/9): Article on the hearing.

Posted on May 8, 2007 at 12:15 PM

The Myth of the Superuser

This is a very interesting law journal paper:

The Myth of the Superuser: Fear, Risk, and Harm Online

Paul Ohm

Abstract: Fear of the powerful computer user, “the Superuser,” dominates debates about online conflict. This mythic figure is difficult to find, immune to technological constraints, and aware of legal loopholes. Policymakers, fearful of his power, too often overreact, passing overbroad, ambiguous laws intended to ensnare the Superuser, but which are used instead against inculpable, ordinary users. This response is unwarranted because the Superuser is often a marginal figure whose power has been greatly exaggerated.

The exaggerated attention to the Superuser reveals a pathological characteristic of the study of power, crime, and security online, which springs from a widely-held fear of the Internet. Building on the social science fear literature, this Article challenges the conventional wisdom and standard assumptions about the role of experts. Unlike dispassionate experts in other fields, computer experts are as susceptible as lay-people to exaggerate the power of the Superuser, in part because they have misapplied Larry Lessig’s ideas about code.

The experts in computer security and Internet law have failed to deliver us from fear, resulting in overbroad prohibitions, harms to civil liberties, wasted law enforcement resources, and misallocated economic investment. This Article urges policymakers and partisans to stop using tropes of fear; calls for better empirical work on the probability of online harm; and proposes an anti-Precautionary Principle, a presumption against new laws designed to stop the Superuser.

If I have one complaint, it’s that Ohm doesn’t take into account the ability of smarter hackers to encapsulate their expertise in easy-to-run software programs and distribute them to those without the skill. He does mention this at the end, in a section about script kiddies, but I think this is a fundamental difference between hacking skills and other potentially criminal skills.

Here’s a three-part summary of the topic by Ohm.

Posted on May 8, 2007 at 6:14 AM • 32 Comments

Weird Lottery Hack

This is a weird story:

On January 4, 2005 Dr Lee and Ms Day presented their Lotto ticket at the World Square Newsagency Bookshop. A friend took their photo with the ticket before they handed it in and filled in a claim form.

After the transaction, the employee who had served them, Chrishartato Ongkoputra, known as Chris Ong, substituted their claim form for one of his own. He then sent his form, and their winning ticket, to NSW Lotteries.

“The stars really aligned for him,” said the barrister James Stevenson, SC, who is representing newsagents Michael Pavellis and his partner Sheila Urech-Tan.

Mr Ong knew that NSW Lotteries would not pay out for 14 days. He told his boss he was having visa problems and needed to return temporarily to Indonesia. He gambled that the backpackers would not chase up their win until after he had left the country.

Gutsy.

Posted on May 7, 2007 at 11:07 AM • 14 Comments

Stink Bombs As Terrorist Tools

Two teenage boys detonated a stink bomb on a Sydney commuter train, prompting a counter-terrorism response.

Best quote:

“It would have been terrifying. You’re on a train, you hear a loud bang, the logical conclusion that people drew was (that it was) probably a terrorist attack,” Mr Owens told reporters.

I agree that it was the conclusion that people drew, but not that it was a logical conclusion.

Posted on May 7, 2007 at 7:15 AM • 43 Comments

U.S./Canadian Dispute over Border Crossing Procedures

Interesting:

The main sticking point was Homeland’s unwillingness to accept Canada’s legal problem with having U.S. authorities take fingerprints of people who approach the border but decide not to cross.

Canadian law doesn’t permit fingerprinting unless someone volunteers or has been charged with a crime.

Canada’s assurances that it would co-operate in investigating any suspicious person who approaches the border weren’t enough, said one Capitol Hill source.

“The Attorney General’s office really just wants to grab as much biometric information as it can,” said the source.

Posted on May 6, 2007 at 12:35 PM • 71 Comments

New Trojan Mimics Windows Activation Interface

Clever:

What they are calling Trojan.Kardphisher doesn’t do most of the technical things that Trojan horses usually do; it’s a pure social engineering attack, aimed at stealing credit card information. In a sense, it’s a standalone phishing program.

Once you reboot your PC after running the program, the program asks you to activate your copy of Windows and, while it assures you that you will not be charged, it asks for credit card information. If you don’t enter the credit card information it shuts down the PC. The Trojan also disables Task Manager, making it more difficult to shut down.

Running on the first reboot is clever. It inherently makes the process look more like it’s coming from Windows itself, and it removes the temporal connection to running the Trojan horse. The program even runs on versions of Windows prior to XP, which did not require activation.
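
The article doesn’t say how the Trojan arranges to run at the next boot, but the standard Windows mechanism for this behavior is a value under the per-user Run registry key. Here’s a minimal Python sketch of that technique (the key path is real; its use by Kardphisher is my assumption, and the names in the demo are illustrative):

```python
# Minimal sketch of the classic run-at-next-logon persistence trick
# (Windows-only; uses the standard-library winreg module).
# Assumption: the article doesn't confirm Kardphisher uses this exact key,
# but it's the most common mechanism for the behavior described.
import sys
import winreg

RUN_KEY = r"Software\Microsoft\Windows\CurrentVersion\Run"

def register_for_next_logon(name: str, command: str) -> None:
    """Add a value under HKCU\\...\\Run; Windows runs it at the next logon."""
    with winreg.OpenKey(winreg.HKEY_CURRENT_USER, RUN_KEY, 0,
                        winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, name, 0, winreg.REG_SZ, command)

if __name__ == "__main__":
    # Benign demo: schedule this machine's Python interpreter to start
    # at the next logon, under an illustrative value name.
    register_for_next_logon("PersistenceDemo", sys.executable)
```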

More info here.

Posted on May 5, 2007 at 7:59 AM • 16 Comments

UK Police Blow Up Bat Detector

Boston-style idiocy from the UK:

Officers were called to Handcross at noon yesterday after a member of the public spotted the box under a bridge over the A23.

Police immediately set up a no-go zone around the site and offered 20 residents shelter in the parish hall while the bomb disposal unit investigated.

Both lanes of the A23 at Pease Pottage, near the motorway junction, and the A272 at Bolney were closed for several hours.

The Horsham Road at Handcross was also shut and traffic diversions set up.

Drivers were advised to avoid the area because of traffic gridlock.

The £1,000 bat detector, which monitors the nocturnal creatures’ calls, was put under the bridge as part of a survey of the endangered creatures.

For those who don’t know, the A23 is the main road between London and Brighton on the south coast. More info on the incident here and here.

I like this comment:

We are working on ways to improve identification of our property to avoid a repeat of the incident.

Might I suggest a sign: “This is not a bomb.”

Refuse to be terrorized, people!

Posted on May 4, 2007 at 1:23 PM • 38 Comments

Do We Really Need a Security Industry?

Last week I attended the Infosecurity Europe conference in London. As at the RSA Conference in February, the show floor was chockablock with network, computer and information security companies. As I often do, I mused about what it means for the IT industry that there are thousands of dedicated security products on the market: some good, more lousy, many difficult even to describe. Why aren’t IT products and services naturally secure, and what would it mean for the industry if they were?

I mentioned this in an interview with Silicon.com, and the published article seems to have caused a bit of a stir. Rather than letting people wonder what I really meant, I thought I should explain.

The primary reason the IT security industry exists is because IT products and services aren’t naturally secure. If computers were already secure against viruses, there wouldn’t be any need for antivirus products. If bad network traffic couldn’t be used to attack computers, no one would bother buying a firewall. If there were no more buffer overflows, no one would have to buy products to protect against their effects. If the IT products we purchased were secure out of the box, we wouldn’t have to spend billions every year making them secure.

Aftermarket security is actually a very inefficient way to spend our security dollars; it may compensate for insecure IT products, but doesn’t help improve their security. Additionally, as long as IT security is a separate industry, there will be companies making money based on insecurity—companies who will lose money if the internet becomes more secure.

Fold security into the underlying products, and the companies marketing those products will have an incentive to invest in security upfront, to avoid having to spend more cash remedying the problems later. Their profits would rise in step with the overall level of security on the internet. Initially we’d still be spending a comparable amount of money per year on security (on secure development practices, embedded security and so on), but some of that money would go into improving the quality of the IT products we’re buying, reducing the amount we spend on security in future years.

I know this is a utopian vision that I probably won’t see in my lifetime, but the IT services market is pushing us in this direction. As IT becomes more of a utility, users are going to buy a whole lot more services than products. And by nature, services are more about results than technologies. Service customers—whether home users or multinational corporations—care less and less about the specifics of security technologies, and increasingly expect their IT to be integrally secure.

Eight years ago, I formed Counterpane Internet Security on the premise that end users (big corporate users, in this case) really don’t want to have to deal with network security. They want to fly airplanes, produce pharmaceuticals or do whatever their core business is. They don’t want to hire the expertise to monitor their network security, and will gladly farm it out to a company that can do it for them. We provided an array of services that took day-to-day security out of the hands of our customers: security monitoring, security-device management, incident response. Security was something our customers purchased, but they purchased results, not details.

Last year BT bought Counterpane, further embedding network security services into the IT infrastructure. BT has customers that don’t want to deal with network management at all; they just want it to work. They want the internet to be like the phone network, or the power grid, or the water system; they want it to be a utility. For these customers, security isn’t even something they purchase: It’s one small part of a larger IT services deal. It’s the same reason IBM bought ISS: to be able to have a more integrated solution to sell to customers.

This is where the IT industry is headed, and when it gets there, there’ll be no point in user conferences like Infosec and RSA. They won’t go away; they’ll simply become industry conferences. If you want to measure progress, look at the demographics of these conferences. A shift toward infrastructure-geared attendees is a measure of success.

Of course, security products won’t disappear—at least, not in my lifetime. There’ll still be firewalls, antivirus software and everything else. There’ll still be startup companies developing clever and innovative security technologies. But the end user won’t care about them. They’ll be embedded within the services sold by large IT outsourcing companies like BT, EDS and IBM, or ISPs like EarthLink and Comcast. Or they’ll be a check-box item somewhere in the core switch.

IT security is getting harder—increasing complexity is largely to blame—and the need for aftermarket security products isn’t disappearing anytime soon. But there’s no earthly reason why users need to know what an intrusion-detection system with stateful protocol analysis is, or why it’s helpful in spotting SQL injection attacks. The whole IT security industry is an accident—an artifact of how the computer industry developed. As IT fades into the background and becomes just another utility, users will simply expect it to work—and the details of how it works won’t matter.
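
For readers who are curious anyway, here’s a toy Python sketch of the final pattern-matching step such a detector performs on a request payload. The signatures are illustrative inventions of mine; a real system with stateful protocol analysis decodes and tracks the protocol first rather than grepping raw strings:

```python
# Toy illustration (mine, not from the essay) of signature matching for
# possible SQL injection in a request payload. A real IDS would reassemble
# the TCP stream and decode the HTTP/SQL protocol state before matching.
import re

SQLI_SIGNATURES = [
    re.compile(r"(?i)\bunion\b.+\bselect\b"),      # UNION-based extraction
    re.compile(r"(?i)'\s*or\s+'?1'?\s*=\s*'?1"),   # classic tautology
    re.compile(r"(?i);\s*drop\s+table\b"),         # piggybacked statement
]

def looks_like_sqli(payload: str) -> bool:
    """Return True if any known injection signature matches the payload."""
    return any(sig.search(payload) for sig in SQLI_SIGNATURES)

print(looks_like_sqli("id=1' OR '1'='1"))  # True
print(looks_like_sqli("id=42"))            # False
```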

This was my 41st essay for Wired.com.

EDITED TO ADD (5/3): Commentary.

EDITED TO ADD (5/4): More commentary.

EDITED TO ADD (5/10): More commentary.

Posted on May 3, 2007 at 10:09 AM • 45 Comments

Security Arms Races in Duck Oviducts and Phalluses

Interesting research at Yale:

Dr. Brennan argues that elaborate female duck anatomy evolves as a countermeasure against aggressive males. “Once they choose a male, they’re making the best possible choice, and that’s the male they want siring their offspring,” she said. “They don’t want the guy flying in from who knows where. It makes sense that they would develop a defense.”

Female ducks seem to be equipped to block the sperm of unwanted males. Their lower oviduct is spiraled like the male phallus, for example, but it turns in the opposite direction. Dr. Brennan suspects that the female ducks can force sperm into one of the pockets and then expel it. “It only makes sense as a barrier,” she said.

To support her argument, Dr. Brennan notes studies on some species that have found that forced matings make up about a third of all matings. Yet only 3 percent of the offspring are the result of forced matings. “To me, it means these females are successful with this strategy,” she said.

Dr. Brennan suspects that when the females of a species evolved better defenses, they drove the evolution of male phalluses. “The males have to step up to produce a longer or more flexible phallus,” she said.
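
A quick back-of-the-envelope calculation with the figures quoted above (my arithmetic, not the researchers’) shows how strong the effect is:

```python
# Using the quoted figures: about 1/3 of matings are forced, yet only 3% of
# offspring result from them. Let p_f and p_c be the per-mating chance that
# a forced or chosen mating sires offspring; then
#   (p_f/3) / (p_f/3 + 2*p_c/3) = 0.03, which we solve for p_f/p_c.
forced_share = 1 / 3       # fraction of all matings that are forced
offspring_share = 0.03     # fraction of offspring sired by forced matings

ratio = (offspring_share / (1 - offspring_share)) * \
        ((1 - forced_share) / forced_share)
print(f"forced mating is ~{ratio:.3f}x as likely to sire offspring")  # ~0.062
# Roughly one-sixteenth the success rate of a chosen mating: consistent with
# the claim that the females' anatomy is an effective defense.
```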

Posted on May 3, 2007 at 7:45 AM • 24 Comments

Tampon Taser

Here’s a taser disguised as a tampon:

The tampon taser/stun gun is the latest in portable and personal security systems. The beauty of this taser/stun gun, aptly named The Pink Stinger, is its ingenious design and ability to be concealed nicely and unassumingly into any purse for ultimate stealth. The taser’s gentle glide zapplicator easily fits in the palm of your hand for incredible comfort and protection, ready for honorable discharge at a moment’s notice. In addition, its fresh floral scent helps eliminate the smell of fear, not just cover it up.

Important disclaimers:

This product strictly for use in accordance with country or state laws. Need not be female or menstruating to use effectively. Tampon taser/stun gun to be used for security purposes only or in self defense. It is not intended nor recommended for vaginal insertion.

Posted on May 2, 2007 at 4:05 PM • 39 Comments

Wiretapping in Italy

Encrypted phones are big business in Italy as a defense against wiretapping:

What has spurred encryption sales is not so much the legal wiretapping authorized by Italian magistrates—though information about those calls is also frequently leaked to the press—but the widespread availability of wiretapping technology over the Internet, which has created a growing pool of amateur eavesdroppers. Those snoops have a ready market in the Italian media for filched celebrity conversations.

Posted on May 2, 2007 at 1:02 PM • 15 Comments

Lawsuit for Not Disclosing a Security Breach

There’s a class-action lawsuit against TJX by various banks and banking groups:

The suit will argue that TJX failed to protect customer data with adequate security measures, and that the Framingham, Mass.-based retail giant was less than honest about how it handled data.

This case could break new legal ground, and is worth watching closely. (I’m rooting for the plaintiffs.)

Posted on May 1, 2007 at 1:53 PM • 28 Comments

Google Ad Hack

Clever:

…the bad guys behind the attack appeared to capitalize on an odd feature of Google’s sponsored links. Normally, when a viewer hovers over a hyperlink, the name of the site that the computer user is about to access appears in the bottom left corner of the browser window. But hovering over Google’s sponsored links shows nothing in that area. That blank space potentially gives bad guys another way to hide where visitors will be taken first.
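
Since the status bar tells you nothing, one defensive habit is to resolve a suspicious link’s redirect chain yourself before visiting it in a browser. A minimal Python sketch (my illustration, not from the article; the example URL is a placeholder):

```python
# Resolve where a link actually leads by following HTTP redirects
# (urllib follows them by default) and reporting the final URL.
import urllib.request

def final_destination(url: str) -> str:
    """Return the URL ultimately served after any redirects."""
    request = urllib.request.Request(url, method="HEAD")
    with urllib.request.urlopen(request) as response:
        return response.geturl()

if __name__ == "__main__":
    print(final_destination("http://example.com/"))  # placeholder URL
```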

Posted on May 1, 2007 at 7:25 AM • 28 Comments
