Entries Tagged "economics of security"


Stealing Commodities

Before his arrest, Tom Berge stole lead roof tiles from several buildings in south-east England, including the Honeywood Museum in Carshalton, the Croydon parish church, and the Sutton high school for girls. He then sold those tiles to scrap metal dealers.

As a security expert, I find this story interesting for two reasons. First, amongst increasingly ridiculous attempts to ban, or at least censor, Google Earth, lest it help the terrorists, here is an actual crime that relied on the service: Berge needed Google Earth for reconnaissance.

But more interesting is the discrepancy between the value of the lead tiles to the original owner and to the thief. The Sutton school had to spend £10,000 to buy new lead tiles; the Croydon Church had to repair extensive water damage after the theft. But Berge only received £700 a ton from London scrap metal dealers.

This isn’t an isolated story; the same dynamic is in play with other commodities as well.

There is an epidemic of copper wiring thefts worldwide; copper is being stolen out of telephone and power stations—and off poles in the streets—and thieves have killed themselves because they didn’t understand the dangers of high voltage. Homeowners are returning from holiday to find the copper pipes stolen from their houses. In 2001, scrap copper was worth 70 cents per pound. In April 2008, it was worth $4.

Gasoline siphoning became more common as pump prices rose. And used restaurant grease, formerly either given away or sold for pennies to farmers, is being stolen from restaurant parking lots and turned into biofuels. Newspapers and other recyclables are stolen from curbs, and trees are stolen and resold as Christmas trees.

Iron fences have been stolen from buildings and houses, manhole covers have been stolen from the middle of streets, and aluminum guard rails have been stolen from roadways. Steel is being stolen for scrap, too. In 2004 in Ukraine, thieves stole an entire steel bridge.

These crimes are particularly expensive to society because the replacement cost is much higher than the thief’s profit. A manhole cover is worth $5–$10 as scrap, but it costs $500 to replace, including labor. A thief may take $20 worth of copper from a construction site, but do $10,000 in damage in the process. And even if the thieves don’t get to the copper or steel, the increased threat means more money being spent on security to protect those commodities in the first place.

Security can be viewed as a tax on the honest, and these thefts demonstrate that our taxes are going up. And unlike many taxes, we don’t benefit from their collection. The cost to society of retrofitting manhole covers with locks, or replacing them with less resalable alternatives, is high; but there is no benefit other than reducing theft.

These crimes are a harbinger of the future: evolutionary pressure on our society, if you will. Criminals are often referred to as social parasites; they leech off society but provide no useful benefit. But they are an early warning system of societal changes. Unfettered by laws or moral restrictions, they can be the first to respond to changes that the rest of society will be slower to pick up on. In fact, currently there’s a reprieve. Scrap metal prices are all down from last year’s—copper is currently $1.62 per pound, and lead is half what Berge got—and thefts are down along with them.

We’ve designed much of our infrastructure around the assumptions that commodities are cheap and theft is rare. We don’t protect transmission lines, manhole covers, iron fences, or lead flashing on roofs. But if commodity prices really are headed for new higher stable points, society will eventually react and find alternatives for these items—or find ways to protect them. Criminals were the first to point this out, and will continue to exploit the system until it restabilizes.

A version of this essay originally appeared in The Guardian.

Posted on April 3, 2009 at 5:25 AM

The Zone of Essential Risk

Bob Blakley makes an interesting point. It’s in the context of eBay fraud, but it’s more general than that.

If you conduct infrequent transactions which are also small, you’ll never lose much money and it’s not worth it to try to protect yourself – you’ll sometimes get scammed, but you’ll have no trouble affording the losses.

If you conduct large transactions, regardless of frequency, each transaction is big enough that it makes sense to insure the transactions or pay an escrow agent. You’ll have occasional experiences of fraud, but you’ll be reimbursed by the insurer or the transactions will be reversed by the escrow agent and you don’t lose anything.

If you conduct small or medium-sized transactions frequently, you can amortize fraud losses using the gains from your other transactions. This is how casinos work; they sometimes lose a hand, but they make it up in the volume.

But if you conduct medium-sized transactions rarely, you’re in trouble. The transactions are big enough so that you care about losses, you don’t have enough transaction volume to amortize those losses, and the cost of insurance or escrow is high enough compared to the value of your transactions that it doesn’t make economic sense to protect yourself.
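Blakley's four cases amount to a simple decision over transaction size and frequency. Here is a rough sketch of that logic; the thresholds are invented purely for illustration and aren't taken from his post:

```python
def risk_strategy(value_usd: float, txns_per_year: int,
                  small=100, large=10_000, frequent=50) -> str:
    """Blakley's quadrants as a toy classifier; every threshold here is
    an invented example, not a figure from his post."""
    if value_usd >= large:
        return "insure the transaction or pay an escrow agent"
    if txns_per_year >= frequent:
        return "self-insure: amortize losses across other transactions"
    if value_usd <= small:
        return "accept the occasional loss; protection isn't worth it"
    return "zone of essential risk: big enough to hurt, too rare to amortize"

# A few hundred dollars, a couple of times a year -- the troublesome case
print(risk_strategy(500, 2))
```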

Posted on March 30, 2009 at 6:50 AM

Fear and the Availability Heuristic

Psychology Today on fear and the availability heuristic:

We use the availability heuristic to estimate the frequency of specific events. For example, how often are people killed by mass murderers? Because higher frequency events are more likely to occur at any given moment, we also use the availability heuristic to estimate the probability that events will occur. For example, what is the probability that I will be killed by a mass murderer tomorrow?

We are especially reliant upon the availability heuristic when we do not have solid evidence from which to base our estimates. For example, what is the probability that the next plane you fly on will crash? The true probability of any particular plane crashing depends on a huge number of factors, most of which you’re not aware of and/or don’t have reliable data on. What type of plane is it? What time of day is the flight? What is the weather like? What is the safety history of this particular plane? When was the last time the plane was examined for problems? Who did the examination and how thorough was it? Who is flying the plane? How much sleep did they get last night? How old are they? Are they taking any medications? You get the idea.

The chances are excellent that you do not have access to all or even most of the information needed to make accurate estimates for just about anything. Indeed, you probably have little or no data from which to base your estimate. Well, that’s not exactly true. In fact, there is one piece of evidence that you always have access to: your memory. Specifically, how easily can you recall previous incidents of the event in question? The easier time we have recalling prior incidents, the greater probability the event has of occurring—at least as far as our minds are concerned. In a nutshell, this is the availability heuristic.

[…]

Although there are many problems associated with the availability heuristic, perhaps the most concerning one is that it often leads people to lose sight of life’s real dangers. Psychologist Gerd Gigerenzer, for example, conducted a fascinating study showing that in the months following September 11, 2001, Americans were less likely to travel by air and more likely to travel by car instead. While it is understandable why Americans would have been fearful of air travel following the incredibly high profile attacks on New York and Washington, the unfortunate result is that Americans died on the highways at alarming rates following 9/11. This is because highway travel is far more dangerous than air travel. More than 40,000 Americans are killed every year on America’s roads. Fewer than 1,000 people die in airplane accidents, and even fewer people are killed aboard commercial airlines.

[…]

Consider, for example, that the 2009 budget for homeland security (the folks that protect us from terrorists) will likely be about $50 billion. Don’t get us wrong, we like the fact that people are trying to prevent terrorism, but even at its absolute worst, terrorists killed about 3,000 Americans in a single year. And less than 100 Americans are killed by terrorists in most years. By contrast, the budget for the National Highway Traffic Safety Administration (the folks who protect us on the road) is about $1 billion, even though more than 40,000 people will die this year on the nation’s roads. In terms of dollars spent per fatality, we fund terrorism prevention at about $17,000,000/fatality (i.e., $50 billion/3,000 fatalities) and accident prevention at about $25,000/fatality (i.e., $1 billion/40,000 fatalities).
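The per-fatality figures in that excerpt are just two divisions over the article's rounded numbers; as a quick sanity check:

```python
# The article's rounded figures, reproduced as a back-of-the-envelope check
homeland_security_budget = 50e9        # ~$50 billion (2009)
terrorism_deaths_worst_year = 3_000    # worst single year
nhtsa_budget = 1e9                     # ~$1 billion
road_deaths_per_year = 40_000

print(f"terrorism prevention: ${homeland_security_budget / terrorism_deaths_worst_year:,.0f} per fatality")
print(f"road safety:          ${nhtsa_budget / road_deaths_per_year:,.0f} per fatality")
# roughly $16,666,667 versus $25,000 -- a factor of about 670
```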

I’ve written about this sort of thing here.

Posted on March 23, 2009 at 12:31 PM

Why People Steal Rare Books

Interesting analysis:

“Book theft is very hard to quantify because very often pages are cut and it’s not noticed for years,” says Rapley. “Often we come across pages from books [in hauls of recovered property] and we work back from there.” The Museum Security Network, a Dutch-based, not-for-profit organisation devoted to co-ordinating efforts to combat this type of theft, estimates that only 2 to 5 per cent of stolen books are recovered, compared with about half of stolen paintings.

“Books are extremely difficult to identify,” Rapley continues. “That means they can be sold commercially at near to market value rather than black-market value.” Thieves know that single pages cut from books to be sold as prints are easier to steal and even harder to trace, so they are often even more desirable than books themselves.

Most thieves simply cut out pages with razor blades and then hide them about their person. High bookshelves, quiet stacks or storage areas, or any lavatories located within reading rooms, are obvious places for such nefarious activities.

Regular users will have noticed that libraries have tightened up security in recent years. Among the strategies employed are CCTV cameras, improved sightlines for librarians, ID and bag checks at entrances and exits, and more floorwalking by security, uniformed or otherwise.

Posted on March 20, 2009 at 6:24 AM

IT Security: Blaming the Victim

Blaming the victim is common in IT: users are to blame because they don’t patch their systems, choose lousy passwords, fall for phishing attacks, and so on. But, while users are, and will continue to be, a major source of security problems, focusing on them is an unhelpful way to think.

People regularly don’t do things they are supposed to: changing the oil in their cars, going to the dentist, replacing the batteries in their smoke detectors. Why? Because people learn from experience. If something is immediately harmful, e.g., touching a hot stove or petting a live tiger, they quickly learn not to do it. But if someone skips an oil change, ignores a computer patch, or chooses a lousy password, it’s unlikely to matter. No feedback, no learning.

We’ve tried to solve this in several ways. We give people rules of thumb: oil change every 5,000 miles; secure password guidelines. Or we send notifications: smoke alarms beep at us, dentists send postcards, Google warns us if we are about to visit a website suspected of hosting malware. But, again, the effects of ignoring these aren’t generally felt immediately.

This makes security primarily a hindrance to the user. It’s a recurring obstacle: something that interferes with the seamless performance of the user’s task. And it’s human nature, wired into our reasoning skills, to remove recurring obstacles. So, if the consequences of bypassing security aren’t obvious, then people will naturally do it.

This is the problem with Microsoft’s User Account Control (UAC). Introduced in Vista, UAC is meant to improve security by limiting the privileges applications have when they’re running. But the security prompts pop up too frequently, and there’s rarely any ill-effect from ignoring them. So people do ignore them.

This doesn’t mean user education is worthless. On the contrary, user education is an important part of any corporate security program. And at home, the more users understand security threats and hacker tactics, the more secure their systems are likely to be. But we should also recognise the limitations of education.

The solution is to better design security systems that assume uneducated users: to prevent them from changing security settings that would leave them exposed to undue risk, or—even better—to take security out of their hands entirely.

For example, we all know that backups are a good thing. But if you forget to do a backup this week, nothing terrible happens. In fact, nothing terrible happens for years on end when you forget. So, despite what you know, you start believing that backups aren’t really that important. Apple got the solution right with its backup utility Time Machine. Install it, plug in an external hard drive, and you are automatically backed up against hardware failure and human error. It’s easier to use it than not.

For its part, Microsoft has made great strides in securing its operating system, providing default security settings in Windows XP and even more in Windows Vista to ensure that, when a naive user plugs a computer in, it’s not defenceless.

Unfortunately, blaming the user can be good business. Mobile phone companies save money if they can bill their customers when a calling card number is stolen and used fraudulently. British banks save money by blaming users when they are victims of chip-and-pin fraud. This is continuing, with some banks going so far as to accuse the victim of perpetrating the fraud, despite evidence of large-scale fraud by organised crime syndicates.

The legal system needs to fix the business problems, but system designers need to work on the technical problems. They must accept that security systems that require the user to do the right thing are doomed to fail. And then they must design resilient security nevertheless.

This essay originally appeared in The Guardian.

Posted on March 12, 2009 at 12:39 PM

More European Chip and Pin Insecurity

“Optimised to Fail: Card Readers for Online Banking,” by Saar Drimer, Steven J. Murdoch, and Ross Anderson.

Abstract

The Chip Authentication Programme (CAP) has been introduced by banks in Europe to deal with the soaring losses due to online banking fraud. A handheld reader is used together with the customer’s debit card to generate one-time codes for both login and transaction authentication. The CAP protocol is not public, and was rolled out without any public scrutiny. We reverse engineered the UK variant of card readers and smart cards and here provide the first public description of the protocol. We found numerous weaknesses that are due to design errors such as reusing authentication tokens, overloading data semantics, and failing to ensure freshness of responses. The overall strategic error was excessive optimisation. There are also policy implications. The move from signature to PIN for authorising point-of-sale transactions shifted liability from banks to customers; CAP introduces the same problem for online banking. It may also expose customers to physical harm.
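The CAP protocol itself isn't public, so nothing below is CAP; it's just a generic challenge-response sketch of what "freshness of responses" means: the verifier sends an unpredictable nonce with each challenge, so a captured response can't be replayed later.

```python
# Generic illustration of response freshness -- NOT the CAP protocol
import hmac, hashlib, secrets

def make_challenge() -> bytes:
    return secrets.token_bytes(16)        # fresh, unpredictable nonce per attempt

def respond(shared_key: bytes, nonce: bytes, txn_details: bytes) -> bytes:
    # The token MACs the nonce together with the transaction details
    return hmac.new(shared_key, nonce + txn_details, hashlib.sha256).digest()

def verify(shared_key: bytes, nonce: bytes, txn_details: bytes, response: bytes) -> bool:
    expected = hmac.new(shared_key, nonce + txn_details, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)   # an old response won't match a new nonce
```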

EDITED TO ADD (3/12): More info.

Posted on March 5, 2009 at 12:45 PM

Perverse Security Incentives

An employee of Whole Foods in Ann Arbor, Michigan, was fired in 2007 for apprehending a shoplifter. More specifically, he was fired for touching a customer, even though that customer had a backpack filled with stolen groceries and was running away with them.

I regularly see security decisions that, like the Whole Foods incident, seem to make absolutely no sense. However, in every case, the decisions actually make perfect sense once you understand the underlying incentives driving the decision. All security decisions are trade-offs, but the motivations behind them are not always obvious: They’re often subjective, and driven by external incentives. And often security trade-offs are made for nonsecurity reasons.

Almost certainly, Whole Foods has a no-touching-the-customer policy because its attorneys recommended it. “No touching” is a security measure as well, but it’s security against customer lawsuits. The cost of these lawsuits would be much, much greater than the $346 worth of groceries stolen in this instance. Even applied to suspected shoplifters, the policy makes sense: The cost of a lawsuit resulting from tackling an innocent shopper by mistake would be far greater than the cost of letting actual shoplifters get away. As perverse as it may seem, the result is completely reasonable given the corporate incentives—Whole Foods wrote a corporate policy that benefited itself.

At least, it works as long as the police and other factors keep society’s shoplifter population down to a reasonable level.

Incentives explain much that is perplexing about security trade-offs. Why does King County, Washington, require one form of ID to get a concealed-carry permit, but two forms of ID to pay for the permit by check? Making a mistake on a gun permit is an abstract problem, but a bad check actually costs some department money.

In the decades before 9/11, why did the airlines fight every security measure except the photo-ID check? Increased security annoys their customers, but the photo-ID check solved a security problem of a different kind: the resale of nonrefundable tickets. So the airlines were on board for that one.

And why does the TSA confiscate liquids at airport security, on the off chance that a terrorist will try to make a liquid explosive instead of using the more common solid ones? Because the officials in charge of the decision used CYA security measures to prevent specific, known tactics rather than broad, general ones.

The same misplaced incentives explain the ongoing problem of innocent prisoners spending years in places like Guantanamo and Abu Ghraib. The solution might seem obvious: Release the innocent ones, keep the guilty ones, and figure out whether the ones we aren’t sure about are innocent or guilty. But the incentives are more perverse than that. Who is going to sign the order releasing one of those prisoners? Which military officer is going to accept the risk, no matter how small, of being wrong?

I read almost five years ago that prisoners were being held by the United States far longer than they should be, because “no one wanted to be responsible for releasing the next Osama bin Laden.” That incentive to do nothing hasn’t changed. It might have even gotten stronger, as these innocents languish in prison.

In all these cases, the best way to change the trade-off is to change the incentives. Look at why the Whole Foods case works. Store employees don’t have to apprehend shoplifters, because society created a special organization specifically authorized to lay hands on people the grocery store points to as shoplifters: the police. If we want more rationality out of the TSA, there needs to be someone with a broader perspective willing to deal with general threats rather than specific targets or tactics.

For prisoners, society has created a special organization specifically entrusted with the role of judging the evidence against them and releasing them if appropriate: the judiciary. It’s only because the George W. Bush administration decided to remove the Guantanamo prisoners from the legal system that we are now stuck with these perverse incentives. Our country would be smart to move as many of these people through the court system as we can.

This essay originally appeared on Wired.com.

Posted on March 2, 2009 at 7:10 AM

Melissa Hathaway Interview

President Obama has tasked Melissa Hathaway with conducting a 60-day review of the nation’s cybersecurity policies.

Who is she?

Hathaway has been working as a cybercoordination executive for the Office of the Director of National Intelligence. She chaired a multiagency group called the National Cyber Study Group that was instrumental in developing the Comprehensive National Cyber Security Initiative, which was approved by former President George W. Bush early last year. Since then, she has been in charge of coordinating and monitoring the CNCI’s implementation.

Although, honestly, the best thing to read to get an idea of how she thinks is this interview from IEEE Security & Privacy:

In the technology field, concern to be first to market often does trump the need for security to be built in up front. Most of the nation’s infrastructure is owned, operated, and developed by the commercial sector. We depend on this sector to address the nation’s broader needs, so we’ll need a new information-sharing environment. Private-sector risk models aren’t congruent with the needs for national security. We need to think about a way to do business that meets both sets of needs. The proposed revisions to Federal Information Security Management Act [FISMA] legislation will raise awareness of vulnerabilities within broader-based commercial systems.

Increasingly, we see industry jointly addressing these vulnerabilities, such as with the Industry Consortium for Advancement of Security on the Internet to share common vulnerabilities and response mechanisms. In addition, there’s the Software Assurance Forum for Excellence in Code, an alliance of vendors who seek to improve software security. Industry is beginning to understand that [it has a] shared risk and shared responsibilities and sees the advantage of coordinating and collaborating up front during the development stage, so that we can start to address vulnerabilities from day one. We also need to look for niche partnerships to enhance product development and build trust into components. We need to understand when and how we introduce risk into the system and ask ourselves whether that risk is something we can live with.

The government is using its purchasing power to influence the market toward better security. We’re already seeing results with the Federal Desktop Core Configuration [FDCC] initiative, a mandated security configuration for federal computers set by the OMB. The Department of Commerce is working with several IT vendors on standardizing security settings for a wide variety of IT products and environments. Because a broad population of the government is using Windows XP and Vista, the FDCC initiative worked with Microsoft and others to determine security needs up front.

Posted on February 24, 2009 at 12:36 PM

Balancing Security and Usability in Authentication

Since January, the Conficker.B worm has been spreading like wildfire across the Internet: infecting the French Navy, hospitals in Sheffield, the court system in Houston, and millions of computers worldwide. One of the ways it spreads is by cracking administrator passwords on networks. Which leads to the important question: Why in the world are IT administrators still using easy-to-guess passwords?

Computer authentication systems have two basic requirements. They need to keep the bad guys from accessing your account, and they need to allow you to access your account. Both are important, and every authentication system is a balancing act between the two. Too little security, and the bad guys will get in too easily. But if the authentication system is too complicated, restrictive, or hard to use, you won’t be able to—or won’t bother to—use it.

Passwords are the most common authentication system, and a good place to start. They’re very easy to implement and use, which is why they’re so popular. But as computers have become faster, password guessing has become easier. Most people don’t choose passwords that are complicated enough to remain secure against modern password-guessing attacks. Conficker.B is even less clever; it just tries a list of about 200 common passwords.

To combat password guessing, many systems force users to choose harder-to-guess passwords—requiring minimum lengths, non-alphanumeric characters, etc.—and change their passwords more frequently. The first makes guessing harder, and the second makes a guessed password less valuable. This, of course, makes the system more annoying, so users respond by writing their passwords down and taping them to their monitors, or simply forgetting them more often. Smarter users write them down and put them in their wallets, or use a secure password database like Password Safe.
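As an illustration of the kind of complexity check such systems run (the specific rules below are arbitrary examples, not a recommendation):

```python
import re

def policy_failures(password: str, min_len: int = 12) -> list:
    """Return the rules a candidate password fails. The rules are arbitrary
    examples of the requirements described above, not a standard."""
    failures = []
    if len(password) < min_len:
        failures.append(f"shorter than {min_len} characters")
    if not re.search(r"[A-Z]", password):
        failures.append("no uppercase letter")
    if not re.search(r"[0-9]", password):
        failures.append("no digit")
    if not re.search(r"[^A-Za-z0-9]", password):
        failures.append("no non-alphanumeric character")
    return failures

print(policy_failures("password1"))   # fails most rules -- the sort of password Conficker.B guesses
```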

Users forgetting their passwords can be expensive—sysadmins or customer service reps have to field phone calls and reset passwords—so some systems include a backup authentication system: a secret question. The idea is that if you forget your password, you can authenticate yourself with some personal information that only you know. Your mother’s maiden name was traditional, but these days there are all sorts of secret questions: your favourite schoolteacher, favourite colour, street you grew up on, name of your first pet, and so on. This might make the system more usable, but it also makes it much less secure: answers can be easy to guess, and are often known by people close to you.

A common enhancement is a one-time password generator, like a SecurID token. This is a small device with a screen that displays a password that changes automatically once a minute. Adding this is called two-factor authentication, and is much more secure, because this token—”something you have”—is combined with a password—”something you know.” But it’s less usable, because the tokens have to be purchased and distributed to all users, and far too often it’s “something you lost or forgot.” And it costs money. Tokens are far more frequently used in corporate environments, but banks and some online gaming worlds have taken to using them—sometimes only as an option, because people don’t like them.
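Such tokens typically derive the displayed code from a secret shared with the server plus the current time. Here's a minimal sketch of the generic idea, in the style of the open TOTP standard (RFC 6238) rather than SecurID's proprietary algorithm:

```python
import hmac, hashlib, struct, time

def time_based_otp(secret: bytes, step: int = 60, digits: int = 6) -> str:
    """Generic time-based one-time password (TOTP-style, RFC 6238).
    Not SecurID's proprietary algorithm -- just the same general idea."""
    counter = int(time.time()) // step                   # changes once per `step` seconds
    msg = struct.pack(">Q", counter)                     # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                           # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

# Server and token share the secret; both compute the same code for the current minute.
print(time_based_otp(b"shared-secret-provisioned-at-enrolment"))
```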

In most cases, how an authentication system works when a legitimate user tries to log on is much more important than how it works when an impostor tries to log on. No security system is perfect, and there is some level of fraud associated with any of these authentication methods. But the instances of fraud are rare compared to the number of times someone tries to log on legitimately. If a given authentication system let the bad guys in one in a hundred times, a bank could decide to live with the problem—or try to solve it in some other way. But if the same authentication system prevented legitimate customers from logging on even one in a thousand times, the number of complaints would be enormous and the system wouldn’t survive one week.
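To see why the false-reject side dominates, compare the absolute numbers at a realistic volume (the volumes below are made up for illustration):

```python
# Made-up volumes to illustrate why legitimate-user failures generate the complaints
logins_per_day = 1_000_000        # assume nearly all are legitimate users
fraud_attempts_per_day = 100      # assume fraud attempts are comparatively rare

false_accept_rate = 1 / 100       # lets the bad guys in one time in a hundred
false_reject_rate = 1 / 1_000     # locks out legitimate users one time in a thousand

print("fraudulent logins admitted per day:   ", fraud_attempts_per_day * false_accept_rate)   # 1.0
print("legitimate customers locked out daily:", logins_per_day * false_reject_rate)           # 1000.0
```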

Balancing security and usability is hard, and many organizations get it wrong. But it’s also evolving; organizations needing to tighten their security continue to push more involved authentication methods, and more savvy Internet users are willing to accept them. And certainly IT administrators need to be leading that evolutionary change.

A version of this essay was originally published in The Guardian.

Posted on February 19, 2009 at 1:44 PM

Cost of the U.S. No-Fly List

Someone did the analysis:

As will be analyzed below, it is estimated that the costs of the no-fly list, since 2002, range from approximately $300 million (a conservative estimate) to $966 million (an estimate on the high end). Using those figures as low and high potentials, a reasonable estimate is that the U.S. government has spent over $500 million on the project since the September 11, 2001 terrorist attacks. Using annual data, this article suggests that the list costs taxpayers somewhere between $50 million and $161 million a year, with a reasonable compromise of those figures at approximately $100 million.

Posted on February 3, 2009 at 1:01 PM

