Entries Tagged "economics of security"


The Ethics of Vulnerability Research

The standard way to take control of someone else’s computer is by exploiting a vulnerability in a software program on it. This was true in the 1960s when buffer overflows were first exploited to attack computers. It was true in 1988 when the Morris worm exploited a Unix vulnerability to attack computers on the Internet, and it’s still how most modern malware works.

Vulnerabilities are software mistakes—mistakes in specification and design, but mostly mistakes in programming. Any large software package will have thousands of mistakes. These vulnerabilities lie dormant in our software systems, waiting to be discovered. Once discovered, they can be used to attack systems. This is the point of security patching: eliminating known vulnerabilities. But many systems don’t get patched, so the Internet is filled with known, exploitable vulnerabilities.

New vulnerabilities are hot commodities. A hacker who discovers one can sell it on the black market, blackmail the vendor with disclosure, or simply publish it without regard to the consequences. Even if he does none of these, the mere fact the vulnerability is known by someone increases the risk to every user of that software. Given that, is it ethical to research new vulnerabilities?

Unequivocally, yes. Despite the risks, vulnerability research is enormously valuable. Security is a mindset, and looking for vulnerabilities nurtures that mindset. Deny practitioners this vital learning tool, and security suffers accordingly.

Security engineers see the world differently than other engineers. Instead of focusing on how systems work, they focus on how systems fail, how they can be made to fail, and how to prevent—or protect against—those failures. Most software vulnerabilities don’t ever appear in normal operations, only when an attacker deliberately exploits them. So security engineers need to think like attackers.

People without the mindset sometimes think they can design security products, but they can’t. And you see the results all over society—in snake-oil cryptography, software, Internet protocols, voting machines, and fare card and other payment systems. Many of these systems had someone in charge of “security” on their teams, but it wasn’t someone who thought like an attacker.

This mindset is difficult to teach, and may be something you’re born with or not. But in order to train people possessing the mindset, they need to search for and find security vulnerabilities—again and again and again. And this is true regardless of the domain. Good cryptographers discover vulnerabilities in others’ algorithms and protocols. Good software security experts find vulnerabilities in others’ code. Good airport security designers figure out new ways to subvert airport security. And so on.

This is so important that when someone shows me a security design by someone I don’t know, my first question is, “What has the designer broken?” Anyone can design a security system that he cannot break. So when someone announces, “Here’s my security system, and I can’t break it,” your first reaction should be, “Who are you?” If he’s someone who has broken dozens of similar systems, his system is worth looking at. If he’s never broken anything, the chance is zero that it will be any good.

Vulnerability research is vital because it trains our next generation of computer security experts. Yes, newly discovered vulnerabilities in software and airports put us at risk, but they also give us more realistic information about how good the security actually is. And yes, there are more and less responsible—and more and less legal—ways to handle a new vulnerability. But the bad guys are constantly searching for new vulnerabilities, and if we have any hope of securing our systems, we need the good guys to be at least as competent. To me, the question isn’t whether it’s ethical to do vulnerability research. If someone has the skill to analyze and provide better insights into the problem, the question is whether it is ethical for him not to do vulnerability research.

This was originally published in InfoSecurity Magazine, as part of a point-counterpoint with Marcus Ranum. You can read Marcus’s half here.

Posted on May 14, 2008 at 11:29 AM

Terrorism as a Tax

Definitely a good way to look at it:

Fear, in other words, is a tax, and al-Qaeda and its ilk have done better at extracting it from Americans than the Internal Revenue Service. Think about the extra half-hour millions of airline passengers waste standing in security lines; the annual cost in lost work hours runs into the billions. Add to that the freight delays at borders, ports and airports, the cost of checking money transfers as well as goods in transit, the wages for beefed-up security forces around the world. And that doesn’t even attempt to put a price tag on the compression of civil liberties or the loss of human dignity from being groped in full public view by Transportation Security Administration personnel at the airport or from having to walk barefoot through the metal detector, holding up your beltless pants. This global transaction tax represents the most significant victory of Terror International to date.

The new fear tax falls most heavily on the United States. Last November, the Commerce Department reported a 17 percent decline in overseas travel to the United States between Sept. 11, 2001, and 2006. (There are no firm figures for 2007 yet, but there seems to have been an uptick.) That slump has cost the country $94 billion in lost tourist spending, nearly 200,000 jobs and $16 billion in forgone tax revenue—and all while the dollar has kept dropping.

Why? The journal Tourism Economics gives the predictable answer: “The perception that U.S. visa and entry policies do not welcome international visitors is the largest factor in the decline of overseas travelers.” Two-thirds of survey respondents worried about being detained for hours because of a misstatement to immigration officials. And here is the ultimate irony: “More respondents were worried about U.S. immigration officials (70 percent) than about crime or terrorism (54 percent) when considering a trip to the country.”

In Beyond Fear I wrote:

Security is a tax on the honest.

If it weren’t for attackers, our lives would be a whole lot easier. In a world where everyone was completely honorable and law-abiding all of the time, everything we bought and did would be cheaper. We wouldn’t have to pay for door locks, police departments, or militaries. There would be no security countermeasures, because people would never consider going where they were not allowed to go or doing what they were not allowed to do. Fraud would not be a problem, because no one would commit fraud. Nor would anyone commit burglary, murder, or terrorism. We wouldn’t have to modify our behavior based on security risks, because there would be none.

But that’s not the world we live in. Security permeates everything we do and supports our society in innumerable ways. It’s there when we wake up in the morning, when we eat our meals, when we’re at work, and when we’re with our families. It’s embedded in our wallets and the global financial network, in the doors of our homes and the border crossings of our countries, in our conversations and the publications we read. We constantly make security trade-offs, whether we’re conscious of them or not: large and small, personal and social. Many more security trade-offs are imposed on us from outside: by governments, by the marketplace, by technology, and by social norms. Security is a part of our world, just as it is part of the world of every other living thing. It has always been a part, and it always will be.

Posted on May 12, 2008 at 6:29 AM

Third Annual Movie-Plot Threat Contest Semi-Finalists

A month ago I announced the Third Annual Movie-Plot Threat Contest:

For this contest, the goal is to create fear. Not just any fear, but a fear that you can alleviate through the sale of your new product idea. There are lots of risks out there, some of them serious, some of them so unlikely that we shouldn’t worry about them, and some of them completely made up. And there are lots of products out there that provide security against those risks.

Your job is to invent one. First, find a risk or create one. It can be a terrorism risk, a criminal risk, a natural-disaster risk, a common household risk—whatever. The weirder the better. Then, create a product that everyone simply has to buy to protect him- or herself from that risk. And finally, write a catalog ad for that product.

[…]

Entries are limited to 150 words … because fear doesn’t require a whole lot of explaining. Tell us why we should be afraid, and why we should buy your product.

Submissions are in. The blog entry has 327 comments. I’ve read them all, and here are the semi-finalists:

It’s not in the running, but reader “False Data” deserves special mention for his Safe-T-Nav, a GPS system that detects high crime zones. It would be a semi-finalist, but it already exists.

Cast your vote; I’ll announce the winner on the 15th.

Posted on May 7, 2008 at 2:33 PM

The RSA Conference

Last week was the RSA Conference, easily the largest information security conference in the world. Over 17,000 people descended on San Francisco’s Moscone Center to hear some of the over 250 talks, attend I-didn’t-try-to-count parties, and try to evade over 350 exhibitors vying to sell them stuff.

Talk to the exhibitors, though, and the most common complaint is that the attendees aren’t buying.

It’s not the quality of the wares. The show floor is filled with new security products, new technologies, and new ideas. Many of these are products that will make the attendees’ companies more secure in all sorts of different ways. The problem is that most of the people attending the RSA Conference can’t understand what the products do or why they should buy them. So they don’t.

I spoke with one person whose trip was paid for by a smallish security firm. He was one of the company’s first customers, and the company was proud to parade him in front of the press. I asked him if he walked through the show floor, looking at the company’s competitors to see if there was any benefit to switching.

“I can’t figure out what any of those companies do,” he replied.

I believe him. The booths are filled with broad product claims, meaningless security platitudes, and unintelligible marketing literature. You could walk into a booth, listen to a five-minute sales pitch by a marketing type, and still not know what the company does. Even seasoned security professionals are confused.

Commerce requires a meeting of minds between buyer and seller, and it’s just not happening. The sellers can’t explain what they’re selling to the buyers, and the buyers don’t buy because they don’t understand what the sellers are selling. There’s a mismatch between the two; they’re so far apart that they’re barely speaking the same language.

This is a bad thing in the near term—some good companies will go bankrupt and some good security technologies won’t get deployed—but it’s a good thing in the long run. It demonstrates that the computer industry is maturing: IT is getting complicated and subtle, and users are starting to treat it like infrastructure.

For a while now I have predicted the death of the security industry. Not the death of information security as a vital requirement, of course, but the death of the end-user security industry that gathers at the RSA Conference. When something becomes infrastructure—power, water, cleaning service, tax preparation—customers care less about details and more about results. Technological innovations become something the infrastructure providers pay attention to, and they package it for their customers.

No one wants to buy security. They want to buy something truly useful—database management systems, Web 2.0 collaboration tools, a company-wide network—and they want it to be secure. They don’t want to have to become IT security experts. They don’t want to have to go to the RSA Conference. This is the future of IT security.

You can see it in the large IT outsourcing contracts that companies are signing—not security outsourcing contracts, but more general IT contracts that include security. You can see it in the current wave of industry consolidation: not large security companies buying small security companies, but non-security companies buying security companies. And you can see it in the new popularity of software as a service: Customers want solutions; who cares about the details?

Imagine if the inventor of antilock brakes—or any automobile safety or security feature—had to sell them directly to the consumer. It would be an uphill battle convincing the average driver that he needed to buy them; maybe that technology would have succeeded and maybe it wouldn’t. But that’s not what happens. Antilock brakes, airbags, and that annoying sensor that beeps when you’re backing up too close to another object are sold to automobile companies, and those companies bundle them together into cars that are sold to consumers. This doesn’t mean that automobile safety isn’t important; indeed, car manufacturers often tout these new features to consumers.

The RSA Conference won’t die, of course. Security is too important for that. There will still be new technologies, new products, and new start-ups. But it will become inward-facing, slowly turning into an industry conference. It’ll be security companies selling to the companies who sell to corporate and home users—and will no longer be a 17,000-person user conference.

This essay originally appeared on Wired.com.

EDITED TO ADD (5/1): Commentary.

Posted on April 22, 2008 at 6:35 AM

The Feeling and Reality of Security

Security is both a feeling and a reality, and they’re different. You can feel secure even though you’re not, and you can be secure even though you don’t feel it. There are two different concepts mapped onto the same word—the English language isn’t working very well for us here—and it can be hard to know which one we’re talking about when we use the word.

There is considerable value in separating out the two concepts: in explaining how the two are different, and understanding when we’re referring to one and when the other. There is value as well in recognizing when the two converge, understanding why they diverge, and knowing how they can be made to converge again.

Some fundamentals first. Viewed from the perspective of economics, security is a trade-off. There’s no such thing as absolute security, and any security you get has some cost: in money, in convenience, in capabilities, in insecurities somewhere else, whatever. Every time someone makes a decision about security—computer security, community security, national security—he makes a trade-off.

People make these trade-offs as individuals. We all get to decide, individually, if the expense and inconvenience of having a home burglar alarm is worth the security. We all get to decide if wearing a bulletproof vest is worth the cost and tacky appearance. We all get to decide if we’re getting our money’s worth from the billions of dollars we’re spending combating terrorism, and if invading Iraq was the best use of our counterterrorism resources. We might not have the power to implement our opinion, but we get to decide if we think it’s worth it.

Now we may or may not have the expertise to make those trade-offs intelligently, but we make them anyway. All of us. People have a natural intuition about security trade-offs, and we make them, large and small, dozens of times throughout the day. We can’t help it: It’s part of being alive.

Imagine a rabbit, sitting in a field eating grass. And he sees a fox. He’s going to make a security trade-off: Should he stay or should he flee? Over time, the rabbits that are good at making that trade-off will tend to reproduce, while the rabbits that are bad at it will tend to get eaten or starve.

So, as a successful species on the planet, you’d expect that human beings would be really good at making security trade-offs. Yet, at the same time, we can be hopelessly bad at it. We spend more money on terrorism than the data warrants. We fear flying and choose to drive instead. Why?

The short answer is that people make most trade-offs based on the feeling of security and not the reality.

I’ve written a lot about how people get security trade-offs wrong, and the cognitive biases that cause us to make mistakes. Humans have developed these biases because they make evolutionary sense. And most of the time, they work.

Most of the time—and this is important—our feeling of security matches the reality of security. Certainly, this is true of prehistory. Modern times are harder. Blame technology, blame the media, blame whatever. Our brains are much better optimized for the security trade-offs endemic to living in small family groups in the East African highlands in 100,000 B.C. than to those endemic to living in 2008 New York.

If we make security trade-offs based on the feeling of security rather than the reality, we choose security that makes us feel more secure over security that actually makes us more secure. And that’s what governments, companies, family members and everyone else provide. Of course, there are two ways to make people feel more secure. The first is to make people actually more secure and hope they notice. The second is to make people feel more secure without making them actually more secure, and hope they don’t notice.

The key here is whether we notice. The feeling and reality of security tend to converge when we take notice, and diverge when we don’t. People notice when 1) there are enough positive and negative examples to draw a conclusion, and 2) there isn’t too much emotion clouding the issue.

Both elements are important. If someone tries to convince us to spend money on a new type of home burglar alarm, we as society will know pretty quickly if he’s got a clever security device or if he’s a charlatan; we can monitor crime rates. But if that same person advocates a new national antiterrorism system, and there weren’t any terrorist attacks before it was implemented, and there weren’t any after it was implemented, how do we know if his system was effective?

People are more likely to realistically assess these incidents if they don’t contradict preconceived notions about how the world works. For example: It’s obvious that a wall keeps people out, so arguing against building a wall across America’s southern border to keep illegal immigrants out is harder to do.

The other thing that matters is agenda. There are lots of people, politicians, companies and so on who deliberately try to manipulate your feeling of security for their own gain. They try to cause fear. They invent threats. They take minor threats and make them major. And when they talk about rare risks with only a few incidents to base an assessment on—terrorism is the big example here—they are more likely to succeed.

Unfortunately, there’s no obvious antidote. Information is important. We can’t understand security unless we understand the risks. But that’s not enough: Few of us really understand cancer, yet we regularly make security decisions based on its risk. What we do is accept that there are experts who understand the risks of cancer, and trust them to make the security trade-offs for us.

There are some complex feedback loops going on here, between emotion and reason, between reality and our knowledge of it, between feeling and familiarity, and between the understanding of how we reason and feel about security and our analyses and feelings. We’re never going to stop making security trade-offs based on the feeling of security, and we’re never going to completely prevent those with specific agendas from trying to take advantage of us. But the more we know, the better trade-offs we’ll make.

This article originally appeared on Wired.com.

Posted on April 8, 2008 at 5:50 AM

Third Annual Movie-Plot Threat Contest

I can’t believe I let April 1 come and go without posting the rules to the Third Annual Movie-Plot Threat Contest. Well, better late than never.

For this contest, the goal is to create fear. Not just any fear, but a fear that you can alleviate through the sale of your new product idea. There are lots of risks out there, some of them serious, some of them so unlikely that we shouldn’t worry about them, and some of them completely made up. And there are lots of products out there that provide security against those risks.

Your job is to invent one. First, find a risk or create one. It can be a terrorism risk, a criminal risk, a natural-disaster risk, a common household risk—whatever. The weirder the better. Then, create a product that everyone simply has to buy to protect him- or herself from that risk. And finally, write a catalog ad for that product.

Here’s an example, pulled from page 25 of the Late Spring 2008 Skymall catalog I’m reading on my airplane right now:

A Turtle is Safe in Water, A Child is Not!

Even with the most vigilant supervision a child can disappear in seconds and not be missed until it’s too late. Our new wireless pool safety alarm system is a must for pool owners and parents of young children. The Turtle Wristband locks on the child’s wrist (a special key is required to remove it) and instantly detects immersion in water and sounds a shrill alarm at the Base Station located in the house or within 100 feet of the pool, spa, or backyard pond. Keep extra wristbands on hand for guests or to protect the family dog.

Entries are limited to 150 words—the example above had 97 words—because fear doesn’t require a whole lot of explaining. Tell us why we should be afraid, and why we should buy your product.

Entries will be judged on creativity, originality, persuasiveness, and plausibility. It’s okay if the product you invent doesn’t actually exist, but this isn’t a science fiction contest.

Portable salmonella detectors for salad bars. Acoustical devices that estimate tiger proximity based on roar strength. GPS-enabled wallets for use when you’ve been pickpocketed. Wrist cuffs that emit fake DNA to fool DNA detectors. The Quantum Sleeper. Fear offers endless business opportunities. Good luck.

Entries due by May 1.

The First Movie-Plot Threat Contest rules and winner. The Second Movie-Plot Threat Contest rules, semifinalists, and winner.

EDITED TO ADD (4/7): Submit your entry in the comments.

EDITED TO ADD (4/8): You people are frighteningly creative.

Posted on April 7, 2008 at 3:50 PM

Outsourcing Passports

The U.S. is outsourcing the manufacture of its RFID passports to some questionable companies.

This is a great illustration of the maxim “security trade-offs are often made for non-security reasons.” I can imagine the manager in charge: “Yes, it’s insecure. But think of the savings!”

The Government Printing Office’s decision to export the work has proved lucrative, allowing the agency to book more than $100 million in recent profits by charging the State Department more money for blank passports than it actually costs to make them, according to interviews with federal officials and documents obtained by The Times.

Another story.

Posted on April 2, 2008 at 6:08 AM

