Essays in the Category "Economics of Security"

Facebook Should Compete on Privacy, Not Hide It Away

  • Bruce Schneier
  • The Guardian
  • July 15, 2009

Reassuring people about privacy makes them more, not less, concerned. It’s called “privacy salience”, and Leslie John, Alessandro Acquisti, and George Loewenstein — all at Carnegie Mellon University — demonstrated this in a series of clever experiments. In one, subjects completed an online survey consisting of a series of questions about their academic behaviour — “Have you ever cheated on an exam?” for example. Half of the subjects were first required to sign a consent warning — designed to make privacy concerns more salient — while the other half were not. Also, subjects were randomly assigned to receive either a privacy confidentiality assurance or no such assurance. When the privacy concern was made salient (through the consent warning), people reacted negatively to the subsequent confidentiality assurance and were less likely to reveal personal information…

Raising the Cost of Paperwork Errors Will Improve Accuracy

  • Bruce Schneier
  • The Guardian
  • June 24, 2009

It’s a sad, horrific story. A homeowner returns to find his house demolished. The demolition company was hired legitimately, but there was a mistake and it demolished the wrong house. The demolition company relied on GPS co-ordinates, but requiring street addresses isn’t a solution. A typo in the address is just as likely, and it would have demolished the house just as quickly. The problem is less how the demolishers knew which house to knock down, and more how they confirmed that knowledge. They trusted the paperwork, and the paperwork was wrong.

Informality works when everybody knows everybody else. When merchants and customers know each other, government officials and citizens know each other, and people know their neighbours, people know what’s going on. In that sort of milieu, if something goes wrong, people notice…

Do You Know Where Your Data Are?

  • Bruce Schneier
  • The Wall Street Journal
  • April 28, 2009

Do you know what your data did last night? Almost none of the more than 27 million people who took the RealAge quiz realized that their personal health data was being used by drug companies to develop targeted e-mail marketing campaigns.

There’s a basic consumer protection principle at work here, and it’s the concept of “unfair and deceptive” trade practices. Basically, a company shouldn’t be able to say one thing and do another: sell used goods as new, lie on ingredients lists, advertise prices that aren’t generally available, claim features that don’t exist, and so on…

An Enterprising Criminal Has Spotted a Gap in the Market

  • Bruce Schneier
  • The Guardian
  • April 2, 2009

Before his arrest, Tom Berge stole lead roof tiles from several buildings in south-east England, including the Honeywood Museum in Carshalton, the Croydon parish church, and the Sutton high school for girls. He then sold those tiles to scrap metal dealers.

As a security expert, I find this story interesting for two reasons. First, amid attempts to ban, or at least censor, Google Earth lest it help the terrorists, here is an actual crime that relied on the service: Berge needed Google Earth for reconnaissance.

But more interesting is the discrepancy between the value of the lead tiles to the original owner and to the thief. The Sutton school had to spend £10,000 to buy new lead tiles; the Croydon church had to repair extensive water damage after the theft. But Berge received only £700 a tonne from London scrap metal dealers…
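The asymmetry the essay highlights can be made concrete with a toy calculation. The £10,000 replacement cost and the £700-per-tonne scrap price come from the essay; the two-tonne haul is a hypothetical assumption for illustration only:

```python
# Toy model of the defender/attacker value asymmetry described above.
# The £10,000 and £700/tonne figures are from the essay; the tonnage
# is a hypothetical assumption.
replacement_cost = 10_000    # what the Sutton school paid for new tiles (£)
scrap_price_per_tonne = 700  # what scrap metal dealers paid Berge (£/tonne)
tonnes_stolen = 2            # hypothetical haul

thief_gain = scrap_price_per_tonne * tonnes_stolen
print(f"Thief's gain:    £{thief_gain:,}")
print(f"Defender's loss: £{replacement_cost:,} (plus water damage)")
print(f"Loss/gain ratio: {replacement_cost / thief_gain:.1f}x")
```

Even under generous assumptions about the haul, the defender's loss exceeds the thief's gain severalfold, which is exactly why deterrence calibrated to the thief's incentive is so cheap relative to the damage done.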

How Perverse Incentives Drive Bad Security Decisions

  • Bruce Schneier
  • Wired
  • February 26, 2009

An employee of Whole Foods in Ann Arbor, Michigan, was fired in 2007 for apprehending a shoplifter. More specifically, he was fired for touching a customer, even though that customer had a backpack filled with stolen groceries and was running away with them.

I regularly see security decisions that, like the Whole Foods incident, seem to make absolutely no sense. In every case, though, the decisions make perfect sense once you understand the underlying incentives. All security decisions are trade-offs, but the motivations behind them are not always obvious: they’re often subjective and driven by external incentives. And often security trade-offs are made for nonsecurity reasons…

Here Comes Here Comes Everybody

Book Review of Here Comes Everybody: The Power of Organizing Without Organizations

  • Bruce Schneier
  • IEEE Spectrum
  • September 2008

In 1937, Ronald Coase answered one of the most perplexing questions in economics: if markets are so great, why do organizations exist? Why don’t people just buy and sell their own services in a market instead? Coase, who won the 1991 Nobel Prize in Economics, answered the question by noting a market’s transaction costs: buyers and sellers need to find one another, then reach agreement, and so on. Coase’s analysis implies that if these transaction costs are low enough, direct markets of individuals make a whole lot of sense. But if they are too high, it makes more sense to get the job done by an organization that hires people…
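Coase's argument reduces to a simple cost comparison, which can be sketched as follows; every number here is an illustrative assumption, not a figure from the essay or from Coase:

```python
# Illustrative sketch of Coase's transaction-cost argument (all numbers
# are hypothetical). Coordinating n workers through a market incurs a
# per-deal search/negotiation cost; a firm replaces those deals with a
# fixed overhead plus a smaller internal coordination cost per worker.

def market_cost(n_workers, transaction_cost):
    # each worker's services must be found, negotiated, and contracted
    return n_workers * transaction_cost

def firm_cost(n_workers, overhead, internal_cost):
    # one-time organizational overhead, then cheap internal direction
    return overhead + n_workers * internal_cost

for t in (5, 20, 80):  # low, medium, and high transaction costs
    m = market_cost(10, t)
    f = firm_cost(10, overhead=300, internal_cost=10)
    cheaper = "market" if m < f else "firm"
    print(f"transaction cost {t:3}: market {m}, firm {f} -> {cheaper}")
```

As transaction costs rise past the crossover point, the firm's fixed overhead becomes the cheaper way to organize the same work — which is the essay's point about why lower transaction costs favor loosely organized groups.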

Boston Court's Meddling With "Full Disclosure" Is Unwelcome

  • Bruce Schneier
  • Wired
  • August 21, 2008

In eerily similar cases in the Netherlands and the United States, courts have recently grappled with the computer-security norm of “full disclosure,” asking whether researchers should be permitted to disclose details of a fare-card vulnerability that allows people to ride the subway for free.

The “Oyster card” used on the London Tube was at issue in the Dutch case, and a similar fare card used on the Boston “T” was at the center of the U.S. case. The Dutch court got it right, and the American court, in Boston, got it wrong from the start — despite facing an open-and-shut case of First Amendment prior restraint…

Why Being Open about Security Makes Us All Safer in the Long Run

  • Bruce Schneier
  • The Guardian
  • August 7, 2008

London’s Oyster card has been cracked, and the final details will become public in October. NXP Semiconductors, the Philips spin-off that makes the system, lost a court battle to prevent the researchers from publishing. People might be able to use this information to ride for free, but the sky won’t be falling. And the publication of this serious vulnerability actually makes us all safer in the long run.

Here’s the story. Every Oyster card has a radio-frequency identification chip that communicates with readers mounted on the ticket barrier. That chip, the “Mifare Classic” chip, is used in hundreds of other transport systems as well — Boston, Los Angeles, Brisbane, Amsterdam, Taipei, Shanghai, Rio de Janeiro — and as an access pass in thousands of companies, schools, hospitals, and government buildings around Britain and the rest of the world…

Economics, Not Apathy, Exposes Chemical Plants To Danger

  • Bruce Schneier
  • Wired
  • October 18, 2007

It’s not true that no one worries about terrorists attacking chemical plants; it’s just that our politics seem to leave us unable to deal with the threat.

Toxins such as ammonia, chlorine, propane and flammable mixtures are constantly being produced or stored in the United States as a result of legitimate industrial processes. Chlorine gas is particularly toxic; in addition to bombing a plant, someone could hijack a chlorine truck or blow up a railcar. Phosgene is even more dangerous. According to the Environmental Protection Agency, there are…

Nonsecurity Considerations in Security Decisions

  • Bruce Schneier
  • IEEE Security & Privacy
  • May/June 2007

Security decisions are generally made for nonsecurity reasons. For security professionals and technologists, this can be a hard lesson. We like to think that security is vitally important. But anyone who has tried to convince the sales VP to give up her department’s Blackberries or the CFO to stop sharing his password with his secretary knows security is often viewed as a minor consideration in a larger decision. This issue’s articles on managing organizational security make this point clear.

Below is a diagram of a security decision. At its core are assets, which a security system protects. Security can fail in two ways: either attackers can successfully bypass it, or it can mistakenly block legitimate users. There are, of course, more users than attackers, so the second kind of failure is often more important. There’s also a feedback mechanism with respect to security countermeasures: both users and attackers learn about the security and its failings. Sometimes they learn how to bypass security, and sometimes they learn not to bother with the asset at all…
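The point about the two failure modes can be sketched numerically. Because legitimate users vastly outnumber attackers, even a small false-positive rate can outweigh the expected cost of missed attacks; all rates and costs below are hypothetical assumptions, not figures from the essay:

```python
# Hypothetical sketch of the two failure modes described above:
# blocking legitimate users (false positives) vs. admitting
# attackers (false negatives). All numbers are assumed.

users_per_day = 10_000       # legitimate access attempts (assumed)
attackers_per_day = 1        # attack attempts (assumed)
false_positive_rate = 0.01   # chance a legitimate user is wrongly blocked
false_negative_rate = 0.10   # chance an attacker gets through
cost_per_blocked_user = 50   # lost business, support calls (assumed, £)
cost_per_breach = 20_000     # damage from one successful attack (assumed, £)

fp_cost = users_per_day * false_positive_rate * cost_per_blocked_user
fn_cost = attackers_per_day * false_negative_rate * cost_per_breach
print(f"Expected daily cost of blocking legitimate users: £{fp_cost:,.0f}")
print(f"Expected daily cost of missed attacks:            £{fn_cost:,.0f}")
```

Under these assumptions the false-positive cost dominates even though each individual blocked user is far cheaper than a breach — the sheer volume of legitimate traffic is what makes the second kind of failure often more important.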
