Entries Tagged "business of security"


Ballmer Blames the Failure of Windows Vista on Security

According to the Telegraph:

Mr Ballmer said: “We got some uneven reception when [Vista] first launched in large part because we made some design decisions to improve security at the expense of compatibility. I don’t think from a word-of-mouth perspective we ever recovered from that.”

Commentary:

Vista’s failure, and Ballmer’s blaming of security for it, is a case of “be careful what you wish for.” Vista (codename “Longhorn” during its development) was always intended to be a more secure operating system. Following the security disasters of 2000 and 2001 that befell Windows 98 and 2000, Microsoft shut down all software development and launched the Trustworthy Computing Initiative, which advocated secure coding practices. Microsoft retrained thousands of programmers to eliminate common security problems such as buffer overflows. The immediate result was a retooling of Windows XP to make it more secure for its 2002 launch. The long-term goal, though, was to make Vista the most secure operating system in Microsoft’s history.

What made XP and Vista more secure? Eliminating (or reducing) buffer overflow errors helped. But what really made a difference was shutting off services by default. Many of the vulnerabilities exploited in Windows 98, NT, and 2000 were actually the result of unused services that were active by default. Microsoft’s own vulnerability tracking shows that Vista has far fewer reported vulnerabilities than any of its predecessors. Unfortunately, locking Vista down out of the box made it less palatable to users.

Security obstacles weren’t the only ills Vista suffered, though. A huge memory footprint, incompatible graphics requirements, slow responsiveness, and a general sense that it was already behind competing Mac and Linux OSes in functionality and features made Vista land with a thud. In my humble opinion, the security gains in Vista were worth many of the tradeoffs; it was the other technical requirements and incompatible applications that doomed this operating system.

There was also the problem of Vista’s endless security warnings. They were almost always false alarms, and there were no adverse effects from ignoring them. So users ignored them, and the warnings ended up being nothing but an annoyance.

Security warnings are often a way for the developer to avoid making a decision. “We don’t know what to do here, so we’ll put up a warning and ask the user.” But unless the users have the information and the expertise to make the decision, they’re not going to be able to. We need user interfaces that only put up warnings when it matters.

I never upgraded to Vista. I’m hoping Windows 7 is worth upgrading to. We’ll see.

EDITED TO ADD (10/22): Another opinion.

Posted on October 21, 2009 at 7:46 AM

Second SHB Workshop Liveblogging (2)

The first session was about deception, moderated by David Clark.

Frank Stajano, Cambridge University (suggested reading: Understanding victims: Six principles for systems security), presented research with Paul Wilson, who films actual scams for “The Real Hustle.” His point is that we build security systems based on our “logic,” but users don’t always follow our logic. It’s fraudsters who really understand what people do, so we need to understand what the fraudsters understand. Things like distraction, greed, unknown accomplices, and social compliance are important.

David Livingstone Smith, University of New England (suggested reading: Less than human: self-deception in the imagining of others; Talk on Lying at La Ciudad de Las Ideas; a subsequent discussion; Why War?), is a philosopher by training, and goes back to basics: “What are we talking about?” A theoretical definition of deception—“that which something has to have to fall under a term”—is difficult to construct. “Cause to have a false belief,” from the Oxford English Dictionary, is inadequate. “To deceive is to intentionally cause someone to have a false belief” also doesn’t work. “Intentionally causing someone to have a false belief that the speaker knows to be false” still isn’t good enough. The fundamental problem is that these are anthropocentric definitions. Deception is not unique to humans; it gives organisms an evolutionary edge. For example, the mirror orchid fools a wasp into landing on it by looking like, and giving off chemicals that mimic, the female wasp. This example shows that we need a broader definition of “purpose.” His formal definition: “For systems A and B, A deceives B iff A possesses some character C with proper function F, and B possesses a mechanism C* with the proper function F* of producing representations, such that the proper function of C is to cause C* to fail to perform F* by causing C* to form false representations, and C does so in virtue of performing F, and B’s falsely representing enables some feature of A to perform its proper function.”

I spoke next, about the psychology of Conficker, how the human brain buys security, and why science fiction writers shouldn’t be hired to think about terrorism risks (to be published on Wired.com next week).

Dominic Johnson, University of Edinburgh (suggested reading: Paradigm Shifts in Security Strategy; Perceptions of victory and defeat), talked about his chapter in the book Natural Security: A Darwinian Approach to a Dangerous World. Life has 3.5 billion years of experience in security innovation; let’s look at how biology approaches security. Biomimicry, ecology, paleontology, animal behavior, evolutionary psychology, immunology, epidemiology, selection, and adaptation are all relevant. Redundancy is a very important survival tool for species. Here’s an adaptation example: the 9/11 threat was real and we knew about it, but we didn’t do anything. His thesis: adaptation to novel security threats tends to occur after major disasters. There are many historical examples of this: Pearl Harbor, for example. Causes include sensory biases, psychological biases, leadership biases, organizational biases, and political biases—all pushing us towards maintaining the status quo. So it’s natural for us to adapt poorly to security threats in the modern world. A questioner from the audience asked whether control theory had any relevance to this model.

Jeff Hancock, Cornell University (suggested reading: On Lying and Being Lied To: A Linguistic Analysis of Deception in Computer-Mediated Communication; Separating Fact From Fiction: An Examination of Deceptive Self-Presentation in Online Dating Profiles), studies interpersonal deception: how the way we lie to each other intersects with communications technologies, how technologies change the way we lie, and whether technology can be used to detect lying. Despite new technology, people lie for traditional reasons. For example, on dating sites, men tend to lie about their height and women tend to lie about their weight. The recordability of the Internet also changes how we lie. The use of the first person singular tends to go down the more people lie. He verified this in many spheres, such as how people describe themselves in chat rooms, and in true versus false statements that the Bush administration made about 9/11 and Iraq. The effect was more pronounced when administration officials were answering questions than when they were reading prepared remarks.

EDITED TO ADD (6/11): Adam Shostack liveblogged this session, too. And Ross’s liveblogging is in his blog post’s comments.

EDITED TO ADD (6/11): Audio of the session is here.

Posted on June 11, 2009 at 9:37 AM

Melissa Hathaway Interview

President Obama has tasked Melissa Hathaway with conducting a 60-day review of the nation’s cybersecurity policies.

Who is she?

Hathaway has been working as a cybercoordination executive for the Office of the Director of National Intelligence. She chaired a multiagency group called the National Cyber Study Group that was instrumental in developing the Comprehensive National Cyber Security Initiative, which was approved by former President George W. Bush early last year. Since then, she has been in charge of coordinating and monitoring the CNCI’s implementation.

Honestly, though, the best thing to read to get an idea of how she thinks is this interview from IEEE Security & Privacy:

In the technology field, concern to be first to market often does trump the need for security to be built in up front. Most of the nation’s infrastructure is owned, operated, and developed by the commercial sector. We depend on this sector to address the nation’s broader needs, so we’ll need a new information-sharing environment. Private-sector risk models aren’t congruent with the needs for national security. We need to think about a way to do business that meets both sets of needs. The proposed revisions to Federal Information Security Management Act [FISMA] legislation will raise awareness of vulnerabilities within broader-based commercial systems.

Increasingly, we see industry jointly addressing these vulnerabilities, such as with the Industry Consortium for Advancement of Security on the Internet to share common vulnerabilities and response mechanisms. In addition, there’s the Software Assurance Forum for Excellence in Code, an alliance of vendors who seek to improve software security. Industry is beginning to understand that [it has a] shared risk and shared responsibilities and sees the advantage of coordinating and collaborating up front during the development stage, so that we can start to address vulnerabilities from day one. We also need to look for niche partnerships to enhance product development and build trust into components. We need to understand when and how we introduce risk into the system and ask ourselves whether that risk is something we can live with.

The government is using its purchasing power to influence the market toward better security. We’re already seeing results with the Federal Desktop Core Configuration [FDCC] initiative, a mandated security configuration for federal computers set by the OMB. The Department of Commerce is working with several IT vendors on standardizing security settings for a wide variety of IT products and environments. Because a broad population of the government is using Windows XP and Vista, the FDCC initiative worked with Microsoft and others to determine security needs up front.

Posted on February 24, 2009 at 12:36 PM

Quantum Cryptography

Quantum cryptography is back in the news, and the basic idea is still unbelievably cool, in theory, and nearly useless in real life.

The idea behind quantum crypto is that two people communicating using a quantum channel can be absolutely sure no one is eavesdropping. Heisenberg’s uncertainty principle requires anyone measuring a quantum system to disturb it, and that disturbance alerts legitimate users as to the eavesdropper’s presence. No disturbance, no eavesdropper—period.
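To make that mechanism concrete, here is a toy simulation of an intercept-and-resend attack on a simplified BB84-style exchange. This is my own illustrative sketch, not anything from the essay or from a real QKD product: an eavesdropper who measures the photons in randomly chosen bases disturbs them, and the legitimate parties see her as an error rate of roughly 25 percent when they compare a sample of their shared bits.

```python
import random

def measure(photons, bases):
    """Measure each photon; a wrong-basis measurement yields a random bit and disturbs the photon."""
    results, forwarded = [], []
    for (bit, sent_basis), basis in zip(photons, bases):
        if basis == sent_basis:
            results.append(bit)
            forwarded.append((bit, sent_basis))
        else:
            random_bit = random.randint(0, 1)
            results.append(random_bit)
            forwarded.append((random_bit, basis))  # the photon now carries the eavesdropper's state
    return results, forwarded

def run(eavesdrop, n=10_000):
    alice_bits = [random.randint(0, 1) for _ in range(n)]
    alice_bases = [random.choice('+x') for _ in range(n)]
    photons = list(zip(alice_bits, alice_bases))

    if eavesdrop:  # Eve measures in random bases and re-sends what she observed
        _, photons = measure(photons, [random.choice('+x') for _ in range(n)])

    bob_bases = [random.choice('+x') for _ in range(n)]
    bob_bits, _ = measure(photons, bob_bases)

    # Keep only positions where Alice and Bob happened to use the same basis,
    # then estimate the error rate by comparing their bits.
    kept = [(a, b) for a, b, ab, bb in zip(alice_bits, bob_bits, alice_bases, bob_bases) if ab == bb]
    errors = sum(1 for a, b in kept if a != b)
    return errors / len(kept)

print("error rate, no eavesdropper:   %.3f" % run(False))  # ~0.000
print("error rate, with eavesdropper: %.3f" % run(True))   # ~0.250
```

With no eavesdropper the compared bits match exactly; with one, about a quarter of them disagree, which is the disturbance the protocol is designed to reveal.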

This month we’ve seen reports on a new working quantum-key distribution network in Vienna, and a new quantum-key distribution technique out of Britain. Great stuff, but headlines like the BBC’s “‘Unbreakable’ encryption unveiled” are a bit much.

The basic science behind quantum crypto was developed, and prototypes built, in the early 1980s by Charles Bennett and Gilles Brassard, and there have been steady advances in engineering since then. I describe basically how it all works in Applied Cryptography, 2nd Edition (pages 554-557). At least one company already sells quantum-key distribution products.

Note that this is totally separate from quantum computing, which also has implications for cryptography. Several groups are working on designing and building a quantum computer, which is fundamentally different from a classical computer. If one were built—and we’re talking science fiction here—then it could factor numbers and solve discrete-logarithm problems very quickly. In other words, it could break all of our commonly used public-key algorithms. For symmetric cryptography it’s not that dire: A quantum computer would effectively halve the key length, so that a 256-bit key would be only as secure as a 128-bit key today. Pretty serious stuff, but years away from being practical. I think the best quantum computer today can factor the number 15.
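As a rough back-of-the-envelope illustration of those rules of thumb (my own sketch, using the standard estimates: Grover’s algorithm for symmetric-key search, Shor’s algorithm for factoring and discrete logs):

```python
# Illustrative only: standard rule-of-thumb estimates, not a claim about any specific system.

def effective_symmetric_bits(key_bits: int) -> int:
    # Grover's algorithm searches 2^k keys in roughly 2^(k/2) steps,
    # so a k-bit symmetric key offers about k/2 bits of security against it.
    return key_bits // 2

# Shor's algorithm breaks factoring- and discrete-log-based public-key systems outright,
# so RSA, DH, DSA, and the elliptic-curve variants don't just weaken -- they fail.
BROKEN_BY_SHOR = {"RSA", "DH", "DSA", "ECDH", "ECDSA"}

for k in (128, 256):
    print(f"{k}-bit symmetric key -> ~{effective_symmetric_bits(k)} bits against a quantum attacker")
print("Public-key algorithms broken outright:", ", ".join(sorted(BROKEN_BY_SHOR)))
```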

While I like the science of quantum cryptography—my undergraduate degree was in physics—I don’t see any commercial value in it. I don’t believe it solves any security problem that needs solving. I don’t believe that it’s worth paying for, and I can’t imagine anyone but a few technophiles buying and deploying it. Systems that use it don’t magically become unbreakable, because the quantum part doesn’t address the weak points of the system.

Security is a chain; it’s as strong as the weakest link. Mathematical cryptography, as bad as it sometimes is, is the strongest link in most security chains. Our symmetric and public-key algorithms are pretty good, even though they’re not based on much rigorous mathematical theory. The real problems are elsewhere: computer security, network security, user interface and so on.

Cryptography is the one area of security that we can get right. We already have good encryption algorithms, good authentication algorithms and good key-agreement protocols. Maybe quantum cryptography can make that link stronger, but why would anyone bother? There are far more serious security problems to worry about, and it makes much more sense to spend effort securing those.

As I’ve often said, it’s like defending yourself against an approaching attacker by putting a huge stake in the ground. It’s useless to argue about whether the stake should be 50 feet tall or 100 feet tall, because either way, the attacker is going to go around it. Even quantum cryptography doesn’t “solve” all of cryptography: The keys are exchanged with photons, but a conventional mathematical algorithm takes over for the actual encryption.

I’m always in favor of security research, and I have enjoyed following the developments in quantum cryptography. But as a product, it has no future. It’s not that quantum cryptography might be insecure; it’s that cryptography is already sufficiently secure.

This essay previously appeared on Wired.com.

EDITED TO ADD (10/21): It’s amazing; even reporters responding to my essay get it completely wrong:

Keith Harrison, a cryptographer with HP Laboratories, is quoted by the Telegraph as saying that, as quantum computing becomes commonplace, hackers will use the technology to crack conventional encryption.

“We have to be thinking about solutions to the problems that quantum computing will pose,” he told the Telegraph. “The average consumer is going to want to know their own transactions and daily business is secure.

“One way of doing this is to use a one-time pad, essentially lists of random numbers where one copy of the numbers is held by the person sending the information and an identical copy is held by the person receiving the information. These are completely unbreakable when used properly,” he explained.

The critical feature of quantum computing is the unique fact that, if someone tampers with an information feed between two parties, then the nature of the quantum feed changes.

This makes eavesdropping impossible.

No, it wouldn’t make eavesdropping impossible. It would make eavesdropping on the communications channel impossible unless someone made an implementation error. (In the 80s, the NSA broke Soviet one-time-pad systems because the Soviets reused the pad.) Eavesdropping via spyware or Trojan or TEMPEST would still be possible.

EDITED TO ADD (10/26): Here’s another commenter who gets it wrong:

Now let me get this straight: I have no doubt that there are many greater worries in security than “mathematical cryptography.” But does this justify totally ignoring the possibility that a cryptographic system might possibly be breakable? I mean maybe I’m influenced by this in the fact that I’ve been sitting in on a cryptanalysis course and I just met a graduate student who broke a cryptographic pseudorandom number generator, but really what kind of an argument is this? “Um, well, sometimes our cryptographic systems have been broken, but that’s nothing to worry about, because, you know, everything is kosher with the systems we are using.”

The point isn’t to ignore the possibility that a cryptographic system might possibly be broken; the point is to pay attention to the other parts of the system that are much, much more likely to be already broken. Security is a chain; it’s only as secure as the weakest link. The cryptographic systems, as potentially flawed as they are, are the strongest link in the chain. We’d get a lot more security by devoting our resources to making all those weaker links more secure.

Again, this is not to say that quantum cryptography isn’t incredibly cool research. It is, and I hope it continues to receive all sorts of funding. But for an operational network that is worried about security: you’ve got much bigger worries than whether Diffie-Hellman will be broken someday.

Posted on October 21, 2008 at 6:48 AM

Security ROI

Return on investment, or ROI, is a big deal in business. Any business venture needs to demonstrate a positive return on investment, and a good one at that, in order to be viable.

It’s become a big deal in IT security, too. Many corporate customers are demanding ROI models to demonstrate that a particular security investment pays off. And in response, vendors are providing ROI models that demonstrate how their particular security solution provides the best return on investment.

It’s a good idea in theory, but it’s mostly bunk in practice.

Before I get into the details, there’s one point I have to make. “ROI” as used in a security context is inaccurate. Security is not an investment that provides a return, like a new factory or a financial instrument. It’s an expense that, hopefully, pays for itself in cost savings. Security is about loss prevention, not about earnings. The term just doesn’t make sense in this context.

But as anyone who has lived through a company’s vicious end-of-year budget-slashing exercises knows, when you’re trying to make your numbers, cutting costs is the same as increasing revenues. So while security can’t produce ROI, loss prevention most certainly affects a company’s bottom line.

And a company should implement only security countermeasures that affect its bottom line positively. It shouldn’t spend more on a security problem than the problem is worth. Conversely, it shouldn’t ignore problems that are costing it money when there are cheaper mitigation alternatives. A smart company needs to approach security as it would any other business decision: costs versus benefits.

The classic methodology is called annualized loss expectancy (ALE), and it’s straightforward. Calculate the cost of a security incident in both tangibles like time and money, and intangibles like reputation and competitive advantage. Multiply that by the chance the incident will occur in a year. That tells you how much you should spend to mitigate the risk. So, for example, if your store has a 10 percent chance of getting robbed and the cost of being robbed is $10,000, then you should spend $1,000 a year on security. Spend more than that, and you’re wasting money. Spend less than that, and you’re also wasting money.

Of course, that $1,000 has to reduce the chance of being robbed to zero in order to be cost-effective. If a security measure cuts the chance of robbery by 40 percent—to 6 percent a year—then you should spend no more than $400 on it. If another security measure reduces it by 80 percent, it’s worth $800. And if two security measures both reduce the chance of being robbed by 50 percent and one costs $300 and the other $700, the first one is worth it and the second isn’t.
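Here is the same arithmetic as a small sketch (my own illustration; the numbers are the ones from the robbery example above, and the function and variable names are mine):

```python
def ale(incident_cost: float, annual_probability: float) -> float:
    """Annualized loss expectancy: expected loss per year from an incident."""
    return incident_cost * annual_probability

def max_worthwhile_spend(incident_cost: float, annual_probability: float,
                         risk_reduction: float) -> float:
    """The most you should pay per year for a measure that cuts the risk by a given fraction."""
    return ale(incident_cost, annual_probability) * risk_reduction

cost, prob = 10_000, 0.10
print(ale(cost, prob))                         # 1000.0 -- the yearly ceiling on total security spending
print(max_worthwhile_spend(cost, prob, 0.40))  # 400.0  -- a measure that cuts the 10% risk to 6%
print(max_worthwhile_spend(cost, prob, 0.80))  # 800.0
half = max_worthwhile_spend(cost, prob, 0.50)  # 500.0
print(300 < half, 700 < half)                  # True False -- the $300 measure pays off, the $700 one doesn't
```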

The Data Imperative

The key to making this work is good data; the term of art is “actuarial tail.” If you’re doing an ALE analysis of a security camera at a convenience store, you need to know the crime rate in the store’s neighborhood and maybe have some idea of how much cameras improve the odds of convincing criminals to rob another store instead. You need to know how much a robbery costs: in merchandise, in time and annoyance, in lost sales due to spooked patrons, in employee morale. You need to know how much not having the cameras costs in terms of employee morale; maybe you’re having trouble hiring salespeople to work the night shift. With all that data, you can figure out if the cost of the camera is cheaper than the loss of revenue if you close the store at night—assuming that the closed store won’t get robbed as well. And then you can decide whether to install one.

Cybersecurity is considerably harder, because there just isn’t enough good data. There aren’t good crime rates for cyberspace, and we have a lot less data about how individual security countermeasures—or specific configurations of countermeasures—mitigate those risks. We don’t even have data on incident costs.

One problem is that the threat moves too quickly. The characteristics of the things we’re trying to prevent change so quickly that we can’t accumulate data fast enough. By the time we get some data, there’s a new threat model for which we don’t have enough data. So we can’t create ALE models.

But there’s another problem, and it’s that the math quickly falls apart when it comes to rare and expensive events. Imagine you calculate the cost—reputational costs, loss of customers, etc.—of having your company’s name in the newspaper after an embarrassing cybersecurity event to be $20 million. Also assume that the odds are 1 in 10,000 of that happening in any one year. ALE says you should spend no more than $2,000 mitigating that risk.

So far, so good. But maybe your CFO thinks an incident would cost only $10 million. You can’t argue, since we’re just estimating. But he just cut your security budget in half. A vendor trying to sell you a product finds a Web analysis claiming that the odds of this happening are actually 1 in 1,000. Accept this new number, and suddenly a product costing 10 times as much is still a good investment.

It gets worse when you deal with even more rare and expensive events. Imagine you’re in charge of terrorism mitigation at a chlorine plant. What’s the cost to your company, in money and reputation, of a large and very deadly explosion? $100 million? $1 billion? $10 billion? And the odds: 1 in a hundred thousand, 1 in a million, 1 in 10 million? Depending on how you answer those two questions—and any answer is really just a guess—you can justify spending anywhere from $10 to $100,000 annually to mitigate that risk.
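A quick sketch (mine, using only the ranges already given above) shows how wildly the “justified” annual spend swings with those guesses:

```python
def justified_annual_spend(incident_cost: float, annual_probability: float) -> float:
    return incident_cost * annual_probability

# The embarrassing-breach example: three equally defensible sets of inputs.
for cost, odds in [(20e6, 10_000), (10e6, 10_000), (20e6, 1_000)]:
    print(f"cost ${cost:,.0f}, odds 1 in {odds:,}: "
          f"spend up to ${justified_annual_spend(cost, 1/odds):,.0f}/yr")

# The chlorine-plant example: plausible guesses justify anywhere from $10 to $100,000 a year.
spends = [justified_annual_spend(cost, 1/odds)
          for cost in (100e6, 1e9, 10e9)
          for odds in (100_000, 1_000_000, 10_000_000)]
print(f"${min(spends):,.0f} to ${max(spends):,.0f} per year")
```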

Or take another example: airport security. Assume that all the new airport security measures increase the waiting time at airports by—and I’m making this up—30 minutes per passenger. There were 760 million passenger boardings in the United States in 2007. This means that the extra waiting time at airports has cost us a collective 43,000 years of extra waiting time. Assume a 70-year life expectancy, and the increased waiting time has “killed” 620 people per year—930 if you calculate the numbers based on 16 hours of awake time per day. So the question is: If we did away with increased airport security, would the result be more people dead from terrorism or fewer?
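The arithmetic behind those numbers, as a sketch (same made-up 30-minutes-per-passenger assumption as in the text):

```python
boardings_per_year = 760_000_000    # U.S. passenger boardings, 2007
extra_minutes = 30                  # assumed extra wait per passenger (made up, as in the text)
life_expectancy_years = 70

extra_hours = boardings_per_year * extra_minutes / 60            # 380 million hours per year
years_of_waiting = extra_hours / (24 * 365)                      # ~43,000 person-years per year
lives_calendar = years_of_waiting / life_expectancy_years        # ~620 "lives" per year
lives_awake = extra_hours / (life_expectancy_years * 365 * 16)   # ~930, counting only waking hours

print(round(years_of_waiting), round(lives_calendar), round(lives_awake))  # 43379 620 930
```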

Caveat Emptor

This kind of thing is why most ROI models you get from security vendors are nonsense. Of course their model demonstrates that their product or service makes financial sense: They’ve jiggered the numbers so that they do.

This doesn’t mean that ALE is useless, but it does mean you should 1) mistrust any analyses that come from people with an agenda and 2) use any results as a general guideline only. So when you get an ROI model from your vendor, take its framework and plug in your own numbers. Don’t even show the vendor your improvements; it won’t consider any changes that make its product or service less cost-effective to be an “improvement.” And use those results as a general guide, along with risk management and compliance analyses, when you’re deciding what security products and services to buy.

This essay previously appeared in CSO Magazine.

Posted on September 2, 2008 at 6:05 AM

How to Sell Security

It’s a truism in sales that it’s easier to sell someone something he wants than a defense against something he wants to avoid. People are reluctant to buy insurance, or home security devices, or computer security anything. It’s not that they don’t ever buy these things; it’s that selling them is an uphill struggle.

The reason is psychological. And it’s the same dynamic when it’s a security vendor trying to sell its products or services, a CIO trying to convince senior management to invest in security, or a security officer trying to implement a security policy with her company’s employees.

It’s also true that the better you understand your buyer, the better you can sell.

First, a bit about Prospect Theory, the underlying theory behind the newly popular field of behavioral economics. Prospect Theory was developed by Daniel Kahneman and Amos Tversky in 1979 (Kahneman went on to win a Nobel Prize for this and other similar work) to explain how people make trade-offs that involve risk. Before this work, economists had a model of “economic man,” a rational being who makes trade-offs based on some logical calculation. Kahneman and Tversky showed that real people are far more subtle and ornery.

Here’s an experiment that illustrates Prospect Theory. Take a roomful of subjects and divide them into two groups. Ask one group to choose between these two alternatives: a sure gain of $500, or a 50 percent chance of gaining $1,000. Ask the other group to choose between these two alternatives: a sure loss of $500, or a 50 percent chance of losing $1,000.

These two trade-offs are very similar, and traditional economics predicts that whether you’re contemplating a gain or a loss doesn’t make a difference: people make trade-offs based on a straightforward calculation of the relative outcome. Some people prefer sure things and others prefer to take chances. Whether the outcome is a gain or a loss doesn’t affect the mathematics and therefore shouldn’t affect the results. This is traditional economics, and it’s called Utility Theory.
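In expected-value terms, the two alternatives in each pair are identical, which is exactly why Utility Theory predicts no framing effect; a two-line check:

```python
# Expected values: each pair is the same bet, just framed as a gain or as a loss.
print(500 == 0.5 * 1000)      # True: sure $500 gain vs. 50% chance of a $1,000 gain
print(-500 == 0.5 * -1000)    # True: sure $500 loss vs. 50% chance of a $1,000 loss
```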

But Kahneman’s and Tversky’s experiments contradicted Utility Theory. When faced with a gain, about 85 percent of people chose the sure smaller gain over the risky larger gain. But when faced with a loss, about 70 percent chose the risky larger loss over the sure smaller loss.

This experiment, repeated again and again by many researchers across ages, genders, cultures, and even species, has consistently yielded the same result, and it rocked economics. Directly contradicting the traditional idea of “economic man,” Prospect Theory recognizes that people have subjective values for gains and losses. We have evolved a cognitive bias: a pair of heuristics. One, a sure gain is better than a chance at a greater gain, or “A bird in the hand is worth two in the bush.” And two, a sure loss is worse than a chance at a greater loss, or “Run away and live to fight another day.” Of course, these are not rigid rules. Only a fool would take a sure $100 over a 50 percent chance at $1,000,000. But all things being equal, we tend to be risk-averse when it comes to gains and risk-seeking when it comes to losses.

This cognitive bias is so powerful that it can lead to logically inconsistent results. Google the “Asian Disease Experiment” for an almost surreal example. Describing the same policy choice in different ways—either as “200 lives saved out of 600” or “400 lives lost out of 600”—yields wildly different risk reactions.

Evolutionarily, the bias makes sense. It’s a better survival strategy to accept small gains rather than risk them for larger ones, and to risk larger losses rather than accept smaller losses. Lions, for example, chase young or wounded wildebeests because the investment needed to kill them is lower. Mature and healthy prey would probably be more nutritious, but there’s a risk of missing lunch entirely if it gets away. And a small meal will tide the lion over until another day. Getting through today is more important than the possibility of having food tomorrow. Similarly, it is better to risk a larger loss than to accept a smaller loss. Because animals tend to live on the razor’s edge between starvation and reproduction, any loss of food—whether small or large—can be equally bad; both can result in death, and the best option is to risk everything for the chance at no loss at all.

How does Prospect Theory explain the difficulty of selling the prevention of a security breach? It’s a choice between a small sure loss—the cost of the security product—and a large risky loss: for example, the results of an attack on one’s network. Of course there’s a lot more to the sale. The buyer has to be convinced that the product works, and he has to understand the threats against him and the risk that something bad will happen. But all things being equal, buyers would rather take the chance that the attack won’t happen than suffer the sure loss that comes from purchasing the security product.

Security sellers know this, even if they don’t understand why, and are continually trying to frame their products in terms of positive results. That’s why you see slogans with the basic message, “We take care of security so you can focus on your business,” or carefully crafted ROI models that demonstrate how profitable a security purchase can be. But these never seem to work. Security is fundamentally a negative sell.

One solution is to stoke fear. Fear is a primal emotion, far older than our ability to calculate trade-offs. And when people are truly scared, they’re willing to do almost anything to make that feeling go away; lots of other psychological research supports that. Any burglar alarm salesman will tell you that people buy only after they’ve been robbed, or after one of their neighbors has been robbed. And the fears stoked by 9/11, and the politics surrounding 9/11, have fueled an entire industry devoted to counterterrorism. When emotion takes over like that, people are much less likely to think rationally.

Though effective, fear mongering is not very ethical. The better solution is not to sell security directly, but to include it as part of a more general product or service. Your car comes with safety and security features built in; they’re not sold separately. Same with your house. And it should be the same with computers and networks. Vendors need to build security into the products and services that customers actually want. CIOs should include security as an integral part of everything they budget for. Security shouldn’t be a separate policy for employees to follow but part of overall IT policy.

Security is inherently about avoiding a negative, so you can never ignore the cognitive bias embedded so deeply in the human brain. But if you understand it, you have a better chance of overcoming it.

This essay originally appeared in CIO.

Posted on May 26, 2008 at 5:57 AM

Third Annual Movie-Plot Threat Contest Winner

On April 7—seven days late—I announced the Third Annual Movie-Plot Threat Contest:

For this contest, the goal is to create fear. Not just any fear, but a fear that you can alleviate through the sale of your new product idea. There are lots of risks out there, some of them serious, some of them so unlikely that we shouldn’t worry about them, and some of them completely made up. And there are lots of products out there that provide security against those risks.

Your job is to invent one. First, find a risk or create one. It can be a terrorism risk, a criminal risk, a natural-disaster risk, a common household risk—whatever. The weirder the better. Then, create a product that everyone simply has to buy to protect him- or herself from that risk. And finally, write a catalog ad for that product.

[…]

Entries are limited to 150 words … because fear doesn’t require a whole lot of explaining. Tell us why we should be afraid, and why we should buy your product.

On May 7, I posted five semi-finalists out of the 327 blog comments:

Sadly, two of those five were above the 150-word limit. Out of the three remaining, I (with the help of my readers) have chosen a winner.

Presenting, the winner of the Third Annual Movie Plot Threat Contest, Aaron Massey:

Tommy Tester Toothpaste Strips:

Many Americans were shocked to hear the results of the research trials regarding heavy metals and toothpaste conducted by the New England Journal of Medicine, which FDA is only now attempting to confirm. This latest scare comes after hundreds of deaths were linked to toothpaste contaminated with diethylene glycol, a potentially dangerous chemical used in antifreeze.

In light of this continuing health risk, Hamilton Health Labs is proud to announce Tommy Tester Toothpaste Strips! Just apply a dab of toothpaste from a fresh tube onto the strip and let it rest for 3 minutes. It’s just that easy! If the strip turns blue, rest assured that your entire tube of toothpaste is safe. However, if the strip turns pink, dispose of the toothpaste immediately and call the FDA health emergency number at 301-443-1240.

Do not let your family become a statistic when the solution is only $2.95!

Aaron wins, well, nothing really, except the fame and glory afforded by this blog. So give him some fame and glory. Congratulations.

Posted on May 15, 2008 at 6:24 AM

Third Annual Movie-Plot Threat Contest Semi-Finalists

A month ago I announced the Third Annual Movie-Plot Threat Contest:

For this contest, the goal is to create fear. Not just any fear, but a fear that you can alleviate through the sale of your new product idea. There are lots of risks out there, some of them serious, some of them so unlikely that we shouldn’t worry about them, and some of them completely made up. And there are lots of products out there that provide security against those risks.

Your job is to invent one. First, find a risk or create one. It can be a terrorism risk, a criminal risk, a natural-disaster risk, a common household risk—whatever. The weirder the better. Then, create a product that everyone simply has to buy to protect him- or herself from that risk. And finally, write a catalog ad for that product.

[…]

Entries are limited to 150 words … because fear doesn’t require a whole lot of explaining. Tell us why we should be afraid, and why we should buy your product.

Submissions are in. The blog entry has 327 comments. I’ve read them all, and here are the semi-finalists:

It’s not in the running, but reader “False Data” deserves special mention for his Safe-T-Nav, a GPS system that detects high crime zones. It would be a semi-finalist, but it already exists.

Cast your vote; I’ll announce the winner on the 15th.

Posted on May 7, 2008 at 2:33 PM

The RSA Conference

Last week was the RSA Conference, easily the largest information security conference in the world. Over 17,000 people descended on San Francisco’s Moscone Center to hear some of the over 250 talks, attend I-didn’t-try-to-count parties, and try to evade over 350 exhibitors vying to sell them stuff.

Talk to the exhibitors, though, and the most common complaint is that the attendees aren’t buying.

It’s not the quality of the wares. The show floor is filled with new security products, new technologies, and new ideas. Many of these are products that will make the attendees’ companies more secure in all sorts of different ways. The problem is that most of the people attending the RSA Conference can’t understand what the products do or why they should buy them. So they don’t.

I spoke with one person whose trip was paid for by a smallish security firm. He was one of the company’s first customers, and the company was proud to parade him in front of the press. I asked him if he walked through the show floor, looking at the company’s competitors to see if there was any benefit to switching.

“I can’t figure out what any of those companies do,” he replied.

I believe him. The booths are filled with broad product claims, meaningless security platitudes, and unintelligible marketing literature. You could walk into a booth, listen to a five-minute sales pitch by a marketing type, and still not know what the company does. Even seasoned security professionals are confused.

Commerce requires a meeting of minds between buyer and seller, and it’s just not happening. The sellers can’t explain what they’re selling to the buyers, and the buyers don’t buy because they don’t understand what the sellers are selling. There’s a mismatch between the two; they’re so far apart that they’re barely speaking the same language.

This is a bad thing in the near term—some good companies will go bankrupt and some good security technologies won’t get deployed—but it’s a good thing in the long run. It demonstrates that the computer industry is maturing: IT is getting complicated and subtle, and users are starting to treat it like infrastructure.

For a while now I have predicted the death of the security industry. Not the death of information security as a vital requirement, of course, but the death of the end-user security industry that gathers at the RSA Conference. When something becomes infrastructure—power, water, cleaning service, tax preparation—customers care less about details and more about results. Technological innovations become something the infrastructure providers pay attention to, and they package it for their customers.

No one wants to buy security. They want to buy something truly useful—database management systems, Web 2.0 collaboration tools, a company-wide network—and they want it to be secure. They don’t want to have to become IT security experts. They don’t want to have to go to the RSA Conference. This is the future of IT security.

You can see it in the large IT outsourcing contracts that companies are signing—not security outsourcing contracts, but more general IT contracts that include security. You can see it in the current wave of industry consolidation: not large security companies buying small security companies, but non-security companies buying security companies. And you can see it in the new popularity of software as a service: Customers want solutions; who cares about the details?

Imagine if the inventor of antilock brakes—or any automobile safety or security feature—had to sell them directly to the consumer. It would be an uphill battle convincing the average driver that he needed to buy them; maybe that technology would have succeeded and maybe it wouldn’t. But that’s not what happens. Antilock brakes, airbags, and that annoying sensor that beeps when you’re backing up too close to another object are sold to automobile companies, and those companies bundle them together into cars that are sold to consumers. This doesn’t mean that automobile safety isn’t important, and often these new features are touted by the car manufacturers.

The RSA Conference won’t die, of course. Security is too important for that. There will still be new technologies, new products, and new start-ups. But it will become inward-facing, slowly turning into an industry conference. It’ll be security companies selling to the companies who sell to corporate and home users—and will no longer be a 17,000-person user conference.

This essay originally appeared on Wired.com.

EDITED TO ADD (5/1): Commentary.

Posted on April 22, 2008 at 6:35 AM
