Entries Tagged "incentives"

Bank Sued for Unauthorized Transaction

This story is interesting:

A Miami businessman is suing Bank of America over $90,000 he says was stolen from his online banking account in a case that highlights the thorny question of who is responsible when a customer’s computer is hacked into.

The typical press coverage of this story is along the lines of “Bank of America sued because customer’s PC was hacked.” But that’s not it. Bank of America is being sued because they allowed an unauthorized transaction to occur, and they’re not making good on that mistake. The transaction happened to occur because the customer’s PC was hacked.

I know nothing about the actual suit and its merits, but this is a problem that is not going away. And while I think that banks should not be held responsible for what’s on their customers’ machines, they should be held responsible for allowing unauthorized transactions to occur. The bank’s internal systems, however set up, for whatever reason, permitted the fraudulent transaction.

There is a simple economic incentive problem here. As long as the banks are not responsible for financial losses from fraudulent transactions over the Internet, banks have no incentive to improve security. But if banks are held responsible for these transactions, you can bet that they won’t allow such shoddy security.

Posted on February 9, 2005 at 8:00 AM

British Pub Hours and Crime

The Economist website (only subscribers can read the article) has an article dated January 6 that illustrates nicely the interplay between security trade-offs and economic agendas.

In the 1990s, local councils were scratching around for ideas about how to revive Britain’s inner cities. Part of the problem was that the cities were dead after their few remaining high-street shops had shut in the evening. Bringing night-life back, it was felt, would bring back young people, and the cheerful social and economic activity they would attract would revive depressed urban areas. The “24-hour city” thus became the motto of every forward-thinking local authority.

For councils to fulfil their plans, Britain’s antiquated drinking laws needed to be liberalised. That has been happening, in stages. The liberalisation culminates in 24-hour drinking licences….

This has worked: “As an urban redevelopment policy, the liberalisation has been tremendously successful. Cities which once relied on a few desultory pubs for entertainment now have centres thumping with activity from early evening all through the night.”

On the other hand, the change comes with a cost. “That is probably why, when crime as a whole has fallen since the late 1990s, violent crime has gone up; and it is certainly why the police have joined the doctors in opposing the 24-hour licences.”

This is all perfectly reasonable. All security is a trade-off, and a community should be able to trade off the economic benefits of a revitalized urban center with the economic costs of an increased police force. Maybe they can issue 24-hour licenses to only a few pubs. Or maybe they can issue 22-hour licenses, or licenses for some other number of hours. Certainly there is a solution that balances the two issues.

But the organization that has to pay the security costs for the program (the police) is not the same as the organization that reaps the benefits (the local governments).

Over the past hundred years, central government’s thirst for power has weakened the local authorities. As a result, policing, which should be a local issue, is largely paid for by central government. So councils, who are largely responsible for licensing, do not pay for the negative consequences of liberalisation.

The result is that the local councils don’t care about the police costs, and consequently make bad security trade-offs.

Posted on January 12, 2005 at 9:01 AM

Fingerprinting Students

A nascent security trend in the U.S. is tracking schoolchildren when they get on and off school buses.

Hoping to prevent the loss of a child through kidnapping or more innocent circumstances, a few schools have begun monitoring student arrivals and departures using technology similar to that used to track livestock and pallets of retail shipments.

A school district in Spring, Texas, is using computerized ID badges to record this information, and wirelessly sending it to police headquarters. Another school district, in Phoenix, is doing the same thing with fingerprint readers. The system is supposed to help prevent the loss of a child, whether through kidnapping or accident.

What’s going on here? Have these people lost their minds? Tracking kids as they get on and off school buses is a ridiculous idea. It’s expensive, invasive, and doesn’t increase security very much.

Security is always a trade-off. In Beyond Fear, I delineated a five-step process to evaluate security countermeasures. The idea is to be able to determine, rationally, whether a countermeasure is worth it. In the book, I applied the five-step process to everything from home burglar alarms to military action against terrorism. Let’s apply it in this case.

Step 1: What assets are you trying to protect? Children.

Step 2: What are the risks to these assets? Loss of the child, either due to kidnapping or accident. Child kidnapping is a serious problem in the U.S.; the odds of a child being abducted by a family member are one in 340, and by a non-family member one in 1,200 (per year). (These statistics are for 1999, and are from NISMART-2, U.S. Department of Justice. My guess is that the current rates in Spring, Texas, are much lower.) Very few of these kidnappings involve school buses, so it’s unclear how serious the specific risks being addressed here are.
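
To put those per-year odds in concrete terms, here is a rough, illustrative calculation. The district enrollment figure is a hypothetical assumption of mine, not something from the NISMART-2 report:

```python
# Back-of-the-envelope reading of the Step 2 odds quoted above.
# The enrollment figure is a hypothetical assumption, for illustration only.

students = 36_000        # assumed enrollment of a large suburban school district
p_family = 1 / 340       # quoted per-child, per-year odds of family abduction
p_nonfamily = 1 / 1200   # quoted per-child, per-year odds of non-family abduction

print(f"Expected family abductions per year:     {students * p_family:.0f}")
print(f"Expected non-family abductions per year: {students * p_nonfamily:.0f}")

# Taken at face value, these odds imply dozens of incidents a year in a single
# district -- which supports the caveat above that actual local rates are much
# lower, and that almost none of those incidents involve the bus ride itself.
```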

Step 3: How well does the security solution mitigate those risks? Not very well.

Let’s imagine how this system might provide security in the event of a kidnapping. If a kidnapper—assume it’s someone the child knows—goes onto the school bus and takes the child off at the wrong stop, the system would record that. Otherwise—if the kidnapping took place either before the child got on the bus or after the child got off—the system wouldn’t record anything suspicious. Yes, it would tell investigators if the kidnapping happened before morning attendance and either before or after the school bus ride, but is that one piece of information worth this entire tracking system? I doubt it.

You could imagine a movie-plot scenario where this kind of tracking system could help the hero recover the kidnapped child, but it hardly seems useful in the general case.

Step 4: What other risks does the security solution cause? The additional risk is the data collected through constant surveillance. Where is this information collected? Who has access to it? How long is it stored? These are important security questions that get no mention.

Step 5: What costs and trade-offs does the security solution impose? There are two. The first is obvious: money. I don’t have exact figures, but it’s expensive to outfit every child with an ID card and every school bus with this system. The second cost is more intangible: a loss of privacy. We are raising children who think it normal that their daily movements are watched and recorded by the police. That feeling of privacy is not something we should give up lightly.

So, finally: is this system worth it? No. The security gained is not worth the money and privacy spent. If the goal is to make children safer, the money would be better spent elsewhere: guards at the schools, education programs for the children, etc.

If this system makes so little sense, why have at least two cities in the U.S. implemented it? The obvious answer is that the school districts didn’t think the problem through. Either they were seduced by the technology, or by the companies that built the system. But there’s another, more interesting, possibility.

In Beyond Fear, I talk about the notion of agenda. The five-step process is a subjective one, and should be evaluated from the point of view of the person making the trade-off decision. If you imagine that the school officials are making the trade-off, then the system suddenly makes sense.

If a kidnapping occurs on school property, the subsequent investigation could easily hurt school officials. They could even lose their jobs. If you view this security countermeasure as one protecting them just as much as it protects children, it suddenly makes more sense. The trade-off might not be worth it in general, but it’s worth it to them.

Kidnapping is a real problem, and countermeasures that help reduce the risk are a good thing. But remember that security is always a trade-off, and a good security system is one where the security benefits are worth the money, convenience, and liberties that are being given up. Quite simply, this system isn’t worth it.

Posted on January 11, 2005 at 9:49 AM

Computer Security and Liability

Information insecurity is costing us billions. We pay for it in theft: information theft, financial theft. We pay for it in productivity loss, both when networks stop working and in the dozens of minor security inconveniences we all have to endure. We pay for it when we have to buy security products and services to reduce those other two losses. We pay for security, year after year.

The problem is that all the money we spend isn’t fixing the problem. We’re paying, but we still end up with insecurities.

The problem is insecure software. It’s bad design, poorly implemented features, inadequate testing and security vulnerabilities from software bugs. The money we spend on security is to deal with the effects of insecure software.

And that’s the problem. We’re not paying to improve the security of the underlying software. We’re paying to deal with the problem rather than to fix it.

The only way to fix this problem is for vendors to fix their software, and they won’t do it until it’s in their financial best interests to do so.

Today, the costs of insecure software aren’t borne by the vendors that produce the software. In economics, this is known as an externality, the cost of a decision that’s borne by people other than those making the decision.

There are no real consequences to the vendors for having bad security or low-quality software. Even worse, the marketplace often rewards low quality. More precisely, it rewards additional features and timely release dates, even if they come at the expense of quality.

If we expect software vendors to reduce features, lengthen development cycles and invest in secure software development processes, it needs to be in their financial best interests to do so. If we expect corporations to spend significant resources on their own network security—especially the security of their customers—it also needs to be in their financial best interests.

Liability law is a way to make it in those organizations’ best interests. Raising the risk of liability raises the costs of doing it wrong and therefore increases the amount of money a CEO is willing to spend to do it right. Security is risk management; liability fiddles with the risk equation.

Basically, we have to tweak the risk equation so the CEO cares about actually fixing the problem, and putting pressure on his balance sheet is the best way to do that.
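
As a sketch of how liability tilts that equation, consider a toy expected-cost comparison. The probabilities, dollar amounts, and liability shares below are hypothetical, chosen only to illustrate the mechanism, not taken from any real case:

```python
# Toy model of the risk equation: a vendor fixes a vulnerability only when the
# expected cost of shipping it exceeds the cost of fixing it. All numbers here
# are hypothetical, for illustration only.

def expected_exposure(p_incident, loss_per_incident, liability_share):
    """Vendor's expected cost of shipping the flaw for one year."""
    return p_incident * loss_per_incident * liability_share

cost_to_fix = 250_000           # assumed engineering cost of fixing the flaw
p_incident = 0.10               # assumed chance the flaw is exploited this year
loss_per_incident = 10_000_000  # assumed total damage to customers if exploited

# Vendor's share of the damages: roughly zero today, versus some share under liability.
for liability_share in (0.0, 0.5):
    exposure = expected_exposure(p_incident, loss_per_incident, liability_share)
    decision = "fix it" if exposure > cost_to_fix else "ship it anyway"
    print(f"liability share {liability_share:.0%}: "
          f"expected exposure ${exposure:,.0f} -> {decision}")

# With no liability, the loss is an externality and the rational choice is to
# ship the flaw. Shift even part of the expected loss onto the vendor and
# fixing becomes the cheaper option.
```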

Clearly, this isn’t all or nothing. There are many parties involved in a typical software attack. There’s the company that sold the software with the vulnerability in the first place. There’s the person who wrote the attack tool. There’s the attacker himself, who used the tool to break into a network. There’s the owner of the network, who was entrusted with defending that network. One hundred percent of the liability shouldn’t fall on the shoulders of the software vendor, just as 100% shouldn’t fall on the attacker or the network owner. But today, 100% of the cost falls directly on the network owner, and that just has to stop.

We will always pay for security. If software vendors have liability costs, they’ll pass those on to us. It might not be cheaper than what we’re paying today. But as long as we’re going to pay, we might as well pay to fix the problem. Forcing the software vendor to pay to fix the problem and then pass those costs on to us means that the problem might actually get fixed.

Liability changes everything. Currently, there is no reason for a software company not to offer feature after feature after feature. Liability forces software companies to think twice before changing something. Liability forces companies to protect the data they’re entrusted with. Liability means that those in the best position to fix the problem are actually responsible for the problem.

Information security isn’t a technological problem. It’s an economics problem. And the way to improve information technology is to fix the economics problem. Do that, and everything else will follow.

This essay originally appeared in Computerworld.

An interesting rebuttal of this piece is here.

Posted on November 3, 2004 at 3:00 PM

Do Terror Alerts Work?

As I read the litany of terror threat warnings that the government has issued in the past three years, the thing that jumps out at me is how vague they are. The careful wording implies everything without actually saying anything. We hear “terrorists might try to bomb buses and rail lines in major U.S. cities this summer,” and there’s “increasing concern about the possibility of a major terrorist attack.” “At least one of these attacks could be executed by the end of the summer 2003.” Warnings are based on “uncorroborated intelligence,” and issued even though “there is no credible, specific information about targets or method of attack.” And, of course, “weapons of mass destruction, including those containing chemical, biological, or radiological agents or materials, cannot be discounted.”

Terrorists might carry out their attacks using cropdusters, helicopters, scuba divers, even prescription drugs from Canada. They might be carrying almanacs. They might strike during the Christmas season, disrupt the “democratic process,” or target financial buildings in New York and Washington.

It’s been more than two years since the government instituted a color-coded terror alert system, and the Department of Homeland Security has issued about a dozen terror alerts in that time. How effective have they been in preventing terrorism? Have they made us any safer, or are they causing harm? Are they, as critics claim, just a political ploy?

When Attorney General John Ashcroft came to Minnesota recently, he said the fact that there had been no terrorist attacks in America in the three years since September 11th was proof that the Bush administration’s anti-terrorist policies were working. I thought: There were no terrorist attacks in America in the three years before September 11th, and we didn’t have any terror alerts. What does that prove?

In theory, the warnings are supposed to cultivate an atmosphere of preparedness. If Americans are vigilant against the terrorist threat, then maybe the terrorists will be caught and their plots foiled. And repeated warnings brace Americans for the aftermath of another attack.

The problem is that the warnings don’t do any of this. Because they are so vague and so frequent, and because they don’t recommend any useful actions that people can take, terror threat warnings don’t prevent terrorist attacks. They might force a terrorist to delay his plan temporarily, or change his target. But in general, professional security experts like me are not particularly impressed by systems that merely force the bad guys to make minor modifications in their tactics.

And the alerts don’t result in a more vigilant America. It’s one thing to issue a hurricane warning, and advise people to board up their windows and remain in the basement. Hurricanes are short-term events, and it’s obvious when the danger is imminent and when it’s over. People can do useful things in response to a hurricane warning; then there is a discrete period when their lives are markedly different, and they feel there was utility in the higher alert mode, even if nothing came of it.

It’s quite another thing to tell people to be on alert, but not to alter their plans—as Americans were instructed last Christmas. A terrorist alert that instills a vague feeling of dread or panic, without giving people anything to do in response, is ineffective. Indeed, it inspires terror itself. Compare people’s reactions to hurricane threats with their reactions to earthquake threats. According to scientists, California is expecting a huge earthquake sometime in the next two hundred years. Even though the magnitude of the disaster will be enormous, people just can’t stay alert for two centuries. The news seems to have generated the same levels of short-term fear and long-term apathy in Californians that the terrorist warnings do. It’s human nature; people simply can’t be vigilant indefinitely.

It’s true too that people want to make their own decisions. Regardless of what the government suggests, people are going to independently assess the situation. They’re going to decide for themselves whether or not changing their behavior seems like a good idea. If there’s no rational information to base their independent assessment on, they’re going to come to conclusions based on fear, prejudice, or ignorance.

We’re already seeing this in the U.S. We see it when Muslim men are assaulted on the street. We see it when a woman on an airplane panics because a Syrian pop group is flying with her. We see it again and again, as people react to rumors about terrorist threats from Al Qaeda and its allies endlessly repeated by the news media.

This all implies that if the government is going to issue a threat warning at all, it should provide as many details as possible. But this is a catch-22: there’s an absolute limit to how much information the government can reveal. The classified nature of the intelligence that goes into these threat alerts precludes the government from giving the public all the information it would need to be meaningfully prepared. And maddeningly, the current administration occasionally compromises the intelligence assets it does have, in the interest of politics. It recently released the name of a Pakistani agent working undercover in Al Qaeda, blowing ongoing counterterrorist operations both in Pakistan and the U.K.

Still, ironically, most of the time the administration projects a “just trust me” attitude. And there are those in the U.S. who trust it, and there are those who do not. Unfortunately, there are good reasons not to trust it. There are two reasons the government likes terror alerts. Both are self-serving, and neither has anything to do with security.

The first is such a common impulse of bureaucratic self-protection that it has achieved a popular acronym in government circles: CYA. If the worst happens and another attack occurs, the American public isn’t going to be as sympathetic to the current administration as it was last time. After the September 11th attacks, the public reaction was primarily shock and disbelief. In response, the government vowed to fight the terrorists. They passed the draconian USA PATRIOT Act, invaded two countries, and spent hundreds of billions of dollars. Next time, the public reaction will quickly turn into anger, and those in charge will need to explain why they failed. The public is going to demand to know what the government knew and why it didn’t warn people, and they’re not going to look kindly on someone who says: “We didn’t think the threat was serious enough to warn people.” Issuing threat warnings is a way to cover themselves. “What did you expect?” they’ll say. “We told you it was Code Orange.”

The second purpose is even more self-serving: Terror threat warnings are a publicity tool. They’re a method of keeping terrorism in people’s minds. Terrorist attacks on American soil are rare, and unless the topic stays in the news, people will move on to other concerns. There is, of course, a hierarchy to these things. Threats against U.S. soil are most important, threats against Americans abroad are next, and terrorist threats—even actual terrorist attacks—against foreigners in foreign countries are largely ignored.

Since the September 11th attacks, Republicans have made “tough on terror” the centerpiece of their reelection strategies. Study after study has shown that Americans who are worried about terrorism are more likely to vote Republican. In 2002, Karl Rove specifically told Republican legislators to run on that platform, and strength in the face of the terrorist threat is the basis of Bush’s reelection campaign. For that strategy to work, people need to be reminded constantly about the terrorist threat and how the current government is keeping them safe.

It has to be the right terrorist threat, though. Last month someone exploded a pipe bomb in a stem-cell research center near Boston, but the administration didn’t denounce this as a terrorist attack. In April 2003, the FBI disrupted a major terrorist plot in the U.S., arresting William Krar and seizing automatic weapons, pipe bombs, bombs disguised as briefcases, and at least one cyanide bomb—an actual chemical weapon. But because Krar was a member of a white supremacist group and not Muslim, Ashcroft didn’t hold a press conference, Tom Ridge didn’t announce how secure the homeland was, and Bush never mentioned it.

Threat warnings can be a potent tool in the fight against terrorism—when there is a specific threat at a specific moment. There are times when people need to act, and act quickly, in order to increase security. But this is a tool that can easily be abused, and when it’s abused it loses its effectiveness.

It’s instructive to look at the European countries that have been dealing with terrorism for decades, like the United Kingdom, Ireland, France, Italy, and Spain. None of these has a color-coded terror-alert system. None calls a press conference on the strength of “chatter.” Even Israel, which has seen more terrorism than any other nation in the world, issues terror alerts only when there is a specific imminent attack and they need people to be vigilant. And these alerts include specific times and places, with details people can use immediately. They’re not dissimilar from hurricane warnings.

A terror alert that instills a vague feeling of dread or panic echoes the very tactics of the terrorists. There are essentially two ways to terrorize people. The first is to do something spectacularly horrible, like flying airplanes into skyscrapers and killing thousands of people. The second is to keep people living in fear with the threat of doing something horrible. Decades ago, that was one of the IRA’s major aims. Inadvertently, the DHS is achieving the same thing.

There’s another downside to incessant threat warnings, one that happens when everyone realizes that they have been abused for political purposes. Call it the “Boy Who Cried Wolf” problem. After too many false alarms, the public will become inured to them. Already this has happened. Many Americans ignore terrorist threat warnings; many even ridicule them. The Bush administration lost considerable respect when it was revealed that August’s New York/Washington warning was based on three-year-old information. And the more recent warning that terrorists might target cheap prescription drugs from Canada was assumed universally to be politics-as-usual.

Repeated warnings do more harm than good, by needlessly creating fear and confusion among those who still trust the government, and anesthetizing everyone else to any future alerts that might be important. And every false alarm makes the next terror alert less effective.

Fighting global terrorism is difficult, and it’s not something that should be played for political gain. Countries that have been dealing with terrorism for decades have realized that much of the real work happens outside of public view, and that often the most important victories are the most secret. The elected officials of these countries take the time to explain this to their citizens, who in return have a realistic view of what the government can and can’t do to keep them safe.

By making terrorism the centerpiece of his reelection campaign, President Bush and the Republicans play a very dangerous game. They’re making many people needlessly fearful. They’re attracting the ridicule of others, both domestically and abroad. And they’re distracting themselves from the serious business of actually keeping Americans safe.

This article was originally published in the October 2004 edition of The Rake.

Posted on October 4, 2004 at 7:08 PM
