Entries Tagged "incentives"


Nonsecurity Considerations in Security Decisions

Security decisions are generally made for nonsecurity reasons. For security professionals and technologists, this can be a hard lesson. We like to think that security is vitally important. But anyone who has tried to convince the sales VP to give up her department’s Blackberries or the CFO to stop sharing his password with his secretary knows security is often viewed as a minor consideration in a larger decision. This issue’s articles on managing organizational security make this point clear.

Below is a diagram of a security decision. At its core are assets, which a security system protects. Security can fail in two ways: either attackers can successfully bypass it, or it can mistakenly block legitimate users. There are, of course, more users than attackers, so the second kind of failure is often more important. There’s also a feedback mechanism with respect to security countermeasures: both users and attackers learn about the security and its failings. Sometimes they learn how to bypass security, and sometimes they learn not to bother with the asset at all.

Threats are complicated: attackers have certain goals, and they implement specific attacks to achieve them. Attackers can be legitimate users of assets, as well (imagine a terrorist who needs to travel by air, but eventually wants to blow up a plane). And a perfectly reasonable outcome of defense is attack diversion: the attacker goes after someone else’s asset instead.

Asset owners control the security system, but not directly. They implement security through some sort of policy—either formal or informal—that some combination of trusted people and trusted systems carries out. Owners are affected by risks … but really, only by perceived risks. They’re also affected by a host of other considerations, including those legitimate users mentioned previously, and the trusted people needed to implement the security policy.

Looking over the diagram, it’s obvious that the effectiveness of security is only a minor consideration in an asset owner’s security decision. And that’s how it should be.

[Diagram caption: Whether a security countermeasure repels or allows attacks (green and red arrows, respectively) is just a small consideration when making a security trade-off.]

This essay originally appeared in IEEE Security and Privacy.

Posted on June 7, 2007 at 11:25 AM

Does Secrecy Help Protect Personal Information?

Personal information protection is an economic problem, not a security problem. And the problem can be easily explained: The organizations we trust to protect our personal information do not suffer when information gets exposed. On the other hand, individuals who suffer when personal information is exposed don’t have the capability to protect that information.

There are actually two problems here: Personal information is easy to steal, and it’s valuable once stolen. We can’t solve one problem without solving the other. The solutions aren’t easy, and you’re not going to like them.

First, fix the economic problem. Credit card companies make more money extending easy credit and making it trivial for customers to use their cards than they lose from fraud. They won’t improve their security as long as you (and not they) are the one who suffers from identity theft. It’s the same for banks and brokerages: As long as you’re the one who suffers when your account is hacked, they don’t have any incentive to fix the problem. And data brokers like ChoicePoint are worse; they don’t suffer if they reveal your information. You don’t have a business relationship with them; you can’t even switch to a competitor in disgust.

Credit card security works as well as it does because the 1968 Truth in Lending Act limits consumer liability for fraud to $50. If the credit card companies could pass fraud losses on to the consumers, they would be spending far less money to stop those losses. But once Congress forced them to suffer the costs of fraud, they invented all sorts of security measures—real-time transaction verification, expert systems patrolling the transaction database and so on—to prevent fraud. The lesson is clear: Make the party in the best position to mitigate the risk responsible for the risk. What this will do is enable the capitalist innovation engine. Once it’s in the financial interest of financial institutions to protect us from identity theft, they will.

Second, stop using personal information to authenticate people. Watch how credit cards work. Notice that the store clerk barely looks at your signature, or how you can use credit cards remotely where no one can check your signature. The credit card industry learned decades ago that authenticating people has only limited value. Instead, they put most of their effort into authenticating the transaction, and they’re much more secure because of it.
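To make the distinction concrete, here is a minimal sketch of transaction-centric authentication. The logic and thresholds are invented for illustration; no card network publishes its actual rules. The point is that every check looks at the charge, not at the person presenting the card.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class Transaction:
    amount: float
    country: str
    merchant_category: str

def risk_score(txn: Transaction, history: list[Transaction]) -> float:
    """Score a charge against the cardholder's past behavior (0 = normal, 1 = suspicious)."""
    score = 0.0
    # Unusually large amount relative to this cardholder's average charge.
    average = mean(t.amount for t in history)
    if txn.amount > 3 * average:
        score += 0.4
    # A country this cardholder has never transacted in before.
    if txn.country not in {t.country for t in history}:
        score += 0.4
    # A merchant category this cardholder has never used before.
    if txn.merchant_category not in {t.merchant_category for t in history}:
        score += 0.2
    return min(score, 1.0)

history = [Transaction(42.50, "US", "grocery"), Transaction(28.00, "US", "fuel")]
charge = Transaction(900.00, "RO", "electronics")
if risk_score(charge, history) > 0.5:
    print("hold for verification")  # flag the transaction, not the person
```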

This won’t solve the problem of securing our personal information, but it will greatly reduce the threat. Once the information is no longer of value, you only have to worry about securing the information from voyeurs rather than the more common—and more financially motivated—fraudsters.

And third, fix the other economic problem: Organizations that expose our personal information aren’t hurt by that exposure. We need a comprehensive privacy law that gives individuals ownership of their personal information and allows them to take action against organizations that don’t care for it properly.

“Passwords” like credit card numbers and mother’s maiden name used to work, but we’ve forever left the world where our privacy comes from the obscurity of our personal information and the difficulty others have in accessing it. We need to abandon security systems that are based on obscurity and difficulty, and build legal protections to take over where technological advances have left us exposed.

This essay appeared in the January issue of Information Security, as the second half of a point/counterpoint with Marcus Ranum. Here’s his half.

Posted on May 14, 2007 at 12:24 PM

1933 Anti-Spam Doorbell

Here’s a great description of an anti-spam doorbell from 1933. A visitor had to deposit a dime into a slot to make the doorbell ring. If the homeowner appreciated the visit, he would return the dime. Otherwise, the dime became the cost of disturbing the homeowner.

This kind of system has been proposed for e-mail as well: the sender has to pay the receiver—or someone else in the system—a nominal amount for each e-mail sent. This money is returned if the e-mail is wanted, and forfeited if it is spam. The result would be to raise the cost of sending spam to the point where it is uneconomical.
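The accounting for such a scheme is genuinely simple. Here is a toy sketch, with an invented central ledger and invented amounts; real proposals differ on who holds the money and how much is charged. It also includes the per-account spending cap discussed below.

```python
BOND = 0.05          # dollars escrowed per message (invented figure)
SPENDING_CAP = 5.00  # upper limit on money at risk per account

class PostageLedger:
    """Toy escrow ledger: bonds are refunded for wanted mail, forfeited for spam."""

    def __init__(self):
        self.at_risk = {}  # sender -> total money currently escrowed or lost
        self.escrow = {}   # message_id -> (sender, bond)

    def send(self, sender, message_id):
        """Escrow a bond for one message; refuse once the cap is reached."""
        risk = self.at_risk.get(sender, 0.0)
        if risk + BOND > SPENDING_CAP:
            return False
        self.at_risk[sender] = risk + BOND
        self.escrow[message_id] = (sender, BOND)
        return True

    def accept(self, message_id):
        """The recipient wanted the mail: return the sender's bond."""
        sender, bond = self.escrow.pop(message_id)
        self.at_risk[sender] -= bond

    def report_spam(self, message_id):
        """The recipient didn't: the bond is forfeited and keeps counting against the cap."""
        sender, bond = self.escrow.pop(message_id)
        return bond
```

The accounting is the easy part; as the rest of this post argues, the weak point is the endpoints that call send().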

I think it’s worth comparing the two systems—the doorbell system and the e-mail system—to demonstrate why this approach won’t work for spam.

The doorbell system fails for three reasons: the percentage of annoying visitors is small enough to make the system largely unnecessary, visitors don’t generally have dimes on them (presumably fixable if the system becomes ubiquitous), and it’s too easy to successfully bypass the system by knocking (not true for an apartment building).

The anti-spam system doesn’t suffer from the first two problems: spam is an enormous percentage of total e-mail, and an automated accounting system makes the financial mechanics easy. But the anti-spam system is too easy to bypass, and it’s too easy to hack. And once you set up a financial system, you’re simply inviting hacks.

The anti-spam system fails because spammers don’t have to send e-mail directly—they can take over innocent computers and send it from them. So it’s the people whose computers have been hacked into, victims in their own right, who will end up paying for spam. This risk can be limited by letting people put an upper limit on the money in their accounts, but it is still serious.

And criminals can exploit the system in the other direction, too. They could hack into innocent computers and have them send “spam” to e-mail addresses the criminals control, collecting the money in the process.

Trying to impose some sort of economic penalty on unwanted e-mail is a good idea, but it won’t work unless the endpoints are trusted. And we’re nowhere near that trust today.

Posted on May 10, 2007 at 5:57 AM

Sponsor-Only Security at the 2012 London Olympics

If you want your security technology to be considered for the London Olympics, you have to be a major sponsor of the event.

…he casually revealed that because neither of these companies was a ‘major sponsor’ of the Olympics their technology could not be used.

Yes, you read that right: as far as the technology behind the security of the London Olympic Games is concerned, best of breed and suitability for purpose do not come into it; paying a large amount of money to the International Olympic Committee does.

I have repeatedly said that security is generally only part of a larger context, but this borders on the ridiculous.

Posted on April 30, 2007 at 5:55 AM

Private Police Forces

In Raleigh, N.C., employees of Capitol Special Police patrol apartment buildings, a bowling alley and nightclubs, stopping suspicious people, searching their cars and making arrests.

Sounds like a good thing, but Capitol Special Police isn’t a police force at all—it’s a for-profit security company hired by private property owners.

This isn’t unique. Private security guards outnumber real police more than 5-1, and increasingly act like them.

They wear uniforms, carry weapons and drive lighted patrol cars on private properties like banks and apartment complexes and in public areas like bus stations and national monuments. Sometimes they operate as ordinary citizens and can only make citizen’s arrests, but in more and more states they’re being granted official police powers.

This trend should greatly concern citizens. Law enforcement should be a government function, and privatizing it puts us all at risk.

Most obviously, there’s the problem of agenda. Public police forces are charged with protecting the citizens of the cities and towns over which they have jurisdiction. Of course, there are instances of policemen overstepping their bounds, but these are exceptions, and the police officers and departments are ultimately responsible to the public.

Private police officers are different. They don’t work for us; they work for corporations. They’re focused on the priorities of their employers or the companies that hire them. They’re less concerned with due process, public safety and civil rights.

Also, many of the laws that protect us from police abuse do not apply to the private sector. Constitutional safeguards that regulate police conduct, interrogation and evidence collection do not apply to private individuals. Information that is illegal for the government to collect about you can be collected by commercial data brokers, then purchased by the police.

We’ve all seen policemen “reading people their rights” on television cop shows. If you’re detained by a private security guard, you don’t have nearly as many rights.

For example, a federal law known as Section 1983 allows you to sue for civil rights violations by the police but not by private citizens. The Freedom of Information Act allows us to learn what government law enforcement is doing, but the law doesn’t apply to private individuals and companies. In fact, most of your civil rights protections apply only to real police.

Training and regulation are another problem. Private security guards often receive minimal training, if any. They don’t graduate from police academies. And while some states regulate these guard companies, others have no regulations at all: anyone can put on a uniform and play policeman. Abuses of power, brutality, and illegal behavior are much more common among private security guards than among real police.

A horrific example of this happened in South Carolina in 1995. Ricky Coleman, an unlicensed and untrained Best Buy security guard with a violent criminal record, choked a fraud suspect to death while another security guard held him down.

This trend is larger than police. More and more of our nation’s prisons are being run by for-profit corporations. The IRS has started outsourcing some back-tax collection to debt-collection companies that will take a percentage of the money recovered as their fee. And there are about 20,000 private police and military personnel in Iraq, working for the Defense Department.

Throughout most of history, specific people were charged by those in power to keep the peace, collect taxes and wage wars. Corruption and incompetence were the norm, and justice was scarce. It is for this very reason that, since the 1600s, European governments have been built around a professional civil service to both enforce the laws and protect rights.

Private security guards turn this bedrock principle of modern government on its head. Whether it’s FedEx policemen in Tennessee who can request search warrants and make arrests; a privately funded surveillance helicopter in Jackson, Miss., that can bypass constitutional restrictions on aerial spying; or employees of Capitol Special Police in North Carolina who are lobbying to expand their jurisdiction beyond the specific properties they protect—privately funded policemen are not protecting us or working in our best interests.

This op-ed originally appeared in the Minneapolis Star-Tribune.

EDITED TO ADD (4/2): This is relevant.

Posted on February 27, 2007 at 6:02 AM

CYA Security

Since 9/11, we’ve spent hundreds of billions of dollars defending ourselves from terrorist attacks. Stories about the ineffectiveness of many of these security measures are common, but less so are discussions of why they are so ineffective. In short: much of our country’s counterterrorism security spending is not designed to protect us from the terrorists, but instead to protect our public officials from criticism when another attack occurs.

Boston, January 31: As part of a guerrilla marketing campaign, a series of amateur-looking blinking signs depicting characters from Aqua Teen Hunger Force, a show on the Cartoon Network, was placed on bridges, near a medical center, underneath an interstate highway, and in other crowded public places.

Police mistook these signs for bombs and shut down parts of the city, eventually spending over $1M sorting it out. Authorities blasted the stunt as a terrorist hoax, while others ridiculed the Boston authorities for overreacting. Almost no one looked beyond the finger pointing and jeering to discuss exactly why the Boston authorities overreacted so badly. They overreacted because the signs were weird.

If someone left a backpack full of explosives in a crowded movie theater, or detonated a truck bomb in the middle of a tunnel, no one would demand to know why the police hadn’t noticed it beforehand. But if a weird device with blinking lights and wires turned out to be a bomb—what every movie bomb looks like—there would be inquiries and demands for resignations. It took the police two weeks to notice the Mooninite blinkies, but once they did, they overreacted because their jobs were at stake.

This is “Cover Your Ass” security, and unfortunately it’s very common.

Airplane security seems to forever be looking backwards. Pre-9/11, it was bombs, guns, and knives. Then it was small blades and box cutters. Richard Reid tried to blow up a plane, and suddenly we all have to take off our shoes. And after last summer’s liquid plot, we’re stuck with a series of nonsensical bans on liquids and gels.

Once you think about this in terms of CYA, it starts to make sense. The TSA wants to be sure that if there’s another airplane terrorist attack, it’s not held responsible for letting it slip through. One year ago, no one could blame the TSA for not detecting liquids. But since everything seems obvious in hindsight, it’s basic job preservation to defend against what the terrorists tried last time.

We saw this kind of CYA security when Boston and New York randomly checked bags on the subways after the London bombing, or when buildings started sprouting concrete barriers after the Oklahoma City bombing. We also see it in ineffective attempts to detect nuclear bombs; authorities employ CYA security against the media-driven threat so they can say “we tried.”

At the same time, we’re ignoring threat possibilities that don’t make the news as much—against chemical plants, for example. But if there were ever an attack, that would change quickly.

CYA also explains the TSA’s inability to take anyone off the no-fly list, no matter how innocent. No one is willing to risk his career by removing someone from the no-fly list who might—no matter how remote the possibility—turn out to be the next terrorist mastermind.

Another form of CYA security is the overly specific countermeasures we see during big events like the Olympics and the Oscars, or in protecting small towns. In all those cases, those in charge of the specific security don’t dare return the money with a message “use this for more effective general countermeasures.” If they were wrong and something happened, they’d lose their jobs.

And finally, we’re seeing CYA security on the national level, from our politicians. We might be better off as a nation funding intelligence gathering and Arabic translators, but it’s a better re-election strategy to fund something visible but ineffective, like a national ID card or a wall between the U.S. and Mexico.

Securing our nation from threats that are weird, threats that either happened before or captured the media’s imagination, and overly specific threats are all examples of CYA security. It happens not because the authorities involved—the Boston police, the TSA, and so on—are not competent, or not doing their job. It happens because there isn’t sufficient national oversight, planning, and coordination.

People and organizations respond to incentives. We can’t expect the Boston police, the TSA, the guy who runs security for the Oscars, or local public officials to balance their own security needs against the security of the nation. They’re all going to respond to the particular incentives imposed from above. What we need is a coherent antiterrorism policy at the national level: one based on real threat assessments, instead of fear-mongering, re-election strategies, or pork-barrel politics.

Sadly, though, there might not be a solution. All the money is in fear-mongering, re-election strategies, and pork-barrel politics. And, like so many things, security follows the money.

This essay originally appeared on Wired.com.

EDITED TO ADD (2/23): Interesting commentary, and a Slashdot thread.

Posted on February 22, 2007 at 5:52 AM

Information Security and Externalities

Information insecurity is costing us billions. There are many different ways in which we pay for information insecurity. We pay for it in theft, such as information theft, financial theft and theft of service. We pay for it in productivity loss, both when networks stop functioning and in the dozens of minor security inconveniences we all have to endure on a daily basis. We pay for it when we have to buy security products and services to reduce those other two losses. We pay for the lack of security, year after year.

Fundamentally, the issue is insecure software. It is a result of bad design, poorly implemented features, inadequate testing and security vulnerabilities from software bugs. The money we spend on security is to deal with the myriad effects of insecure software. Unfortunately, the money spent does not improve the security of that software. We are paying to mitigate the risk rather than fix the problem.

The only way to fix the problem is for vendors to improve their software. They need to design security into their products from the start, not as an add-on feature. Software vendors also need to institute good security practices and improve the overall quality of their products. But they will not do this until it is in their financial best interests to do so. And so far, it is not.

The reason is easy to explain. In a capitalist society, businesses are profit-making ventures, so they make decisions based on both short- and long-term profitability. This holds true for decisions about product features and sale prices, but it also holds true for software. Vendors try to balance the costs of more secure software—extra developers, fewer features, longer time to market—against the costs of insecure software: expense to patch, occasional bad press, potential loss of sales.

So far, so good. But what the vendors do not look at is the total costs of insecure software; they only look at what insecure software costs them. And because of that, they miss a lot of the costs: all the money we, the software product buyers, are spending on security. In economics, this is known as an externality: the cost of a decision that is borne by people other than those taking the decision.

Normally, you would expect users to respond by favouring secure products over insecure products—after all, users are also making their buying decisions based on the same capitalist model. Unfortunately, that is not generally possible. In some cases software monopolies limit the available product choice; in other cases, the ‘lock-in effect’ created by proprietary file formats or existing infrastructure or compatibility requirements makes it harder to switch; and in still other cases, none of the competing companies have made security a differentiating characteristic. In all cases, it is hard for an average buyer to distinguish a truly secure product from an insecure product with a ‘trust us’ marketing campaign.

Because of all these factors, there are no real consequences to the vendors for having insecure or low-quality software. Even worse, the marketplace often rewards low quality. More precisely, it rewards additional features and timely release dates, even if they come at the expense of quality. The result is what we have all witnessed: insecure software. Companies find that it is cheaper to weather the occasional press storm, spend money on PR campaigns touting good security and fix public problems after the fact, than to design security in from the beginning.

And so the externality remains…

If we expect software vendors to reduce features, lengthen development cycles and invest in secure software development processes, it needs to be in their financial best interests to do so. If we expect corporations to spend significant resources on their own network security—especially the security of their customers—it also needs to be in their financial best interests.

Liability law is one way to make it in those organisations’ best interests. If end users could sue software manufacturers for product defects, then the cost of those defects to the software manufacturers would rise. Manufacturers would then pay the true economic cost for poor software, and not just a piece of it. So when they balance the cost of making their software secure versus the cost of leaving their software insecure, there would be more costs on the latter side. This would provide an incentive for them to make their software more secure.
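A back-of-the-envelope example, with numbers invented purely for illustration, shows how liability changes the vendor's arithmetic:

```python
cost_to_secure = 10.0        # vendor's cost of building the software securely ($M, invented)
vendor_cost_insecure = 2.0   # patches and bad press: what the vendor feels today ($M, invented)
user_cost_insecure = 50.0    # customer losses: the externality ($M, invented)

def vendor_secures(cost_to_secure, insecurity_cost_to_vendor):
    """A profit-driven vendor secures its product only when that is its cheaper option."""
    return cost_to_secure < insecurity_cost_to_vendor

# Without liability, the vendor weighs $10M against only its own $2M exposure:
print(vendor_secures(cost_to_secure, vendor_cost_insecure))                       # False
# With liability, the users' $50M lands on the vendor's side of the ledger:
print(vendor_secures(cost_to_secure, vendor_cost_insecure + user_cost_insecure))  # True
```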

Basically, we have to tweak the risk equation in such a way that the Chief Executive Officer (CEO) of a company cares about actually fixing the problem—and putting pressure on the balance sheet is the best way to do that. Security is risk management; liability fiddles with the risk equation.

Clearly, liability is not all or nothing. There are many parties involved in a typical software attack. The list includes:

  • the company that sold the software with the vulnerability in the first place
  • the person who wrote the attack tool
  • the attacker himself, who used the tool to break into a network
  • and finally, the owner of the network, who was entrusted with defending that network.

100% of the liability should not fall on the shoulders of the software vendor, just as 100% should not fall on the attacker or the network owner. But today, 100% of the cost falls directly on the network owner, and that just has to stop.

Certainly, making software more secure will cost money, and manufacturers will have to pass those costs on to users in the form of higher prices. But users are already paying extra costs for insecure software: costs of third-party security products, costs of consultants and security services companies, direct and indirect costs of losses. As long as one is going to pay anyway, it would be better to pay to fix the problem. Forcing the software vendor to pay to fix the problem and then passing those costs on to users means that the actual problem might get fixed.

Liability changes everything. Currently, there is no reason for a software company not to offer feature after feature after feature, without any regard to security. Liability forces software companies to think twice before changing something. Liability forces companies to protect the data they are entrusted with. Liability means that those in the best position to fix the problem are actually responsible for the problem.

Information security is not a technological problem. It is an economics problem. And the way to improve information security is to fix the economics problem. Do that, and the market will demand the right technological solutions, and vendors will happily implement them. Fail to solve the economics problem, and vendors will not bother implementing or researching any security technologies, regardless of how effective they are.

This essay previously appeared in the European Network and Information Security Agency quarterly newsletter, and is an update to a 2004 essay I wrote for Computerworld.

Posted on January 18, 2007 at 7:04 AM

Wal-Mart Stays Open During Bomb Scare

This is interesting: A Wal-Mart store in Mitchell, South Dakota, receives a bomb threat. The store managers decide not to evacuate while the police search for the bomb. Presumably, they decided that the revenue lost during an evacuation was not worth the additional safety an evacuation would provide:

During the nearly two-hour search Wal-Mart officials opted not to evacuated [sic] the busy discount store even though police recomended [sic] they do so. Wal-Mart officials said the call was a hoax and not a threat.

I think this is a good sign. It shows that people are thinking rationally about security trade-offs, and not thoughtlessly being terrorized.

Remember, though: security trade-offs are based on agenda. From the perspective of the Wal-Mart managers, the store’s revenues matter most; most of the risks of the bomb threat are externalities.

Of course, the store employees have a different agenda—there is no upside to staying open, and only a downside due to the additional risk—and they didn’t like the decision:

The incident has family members of Wal-Mart employees criticizing store officials for failing to take police’s recommendation to evacuate.

Voorhees has worked at the Mitchell discount chain since Wal-Mart Supercenter opened in 2001. Her daughter, Charlotte Goode, 36, said Voorhees called her Sunday, crying and upset as she relayed the story.

“It’s right before Christmas. They were swamped with people,” she said. “To me, they endangerd [sic] the community, customers and associates. They put making a buck ahead of public safety.”

Posted on December 28, 2006 at 1:32 PM

Tracking Automobiles Through their Tires

Automobile tires are now being outfitted with RFID transmitters:

Schrader Bridgeport is the market leader in direct Tire Pressure Monitoring Systems. Direct TPMS use pressure sensors inside each tire to transmit data to a dashboard display alerting drivers to tire pressure problems.

I’ll bet anything you can track cars with them, just as you can track some joggers by their sneakers.

As I said before, the people who are designing these systems are putting “zero thought into security and privacy issues. Unless we enact some sort of broad law requiring companies to add security into these sorts of systems, companies will continue to produce devices that erode our privacy through new technologies. Not on purpose, not because they’re evil—just because it’s easier to ignore the externality than to worry about it.”

Posted on December 27, 2006 at 7:44 AM

Tracking People by their Sneakers

Researchers at the University of Washington have demonstrated a surveillance system that automatically tracks people through the Nike+iPod Sport Kit. Basically, the kit contains a transmitter that you stick in your sneakers and a receiver you attach to your iPod. This allows you to track things like time, distance, pace, and calories burned. Pretty clever.

However, it turns out that the transmitter in your sneaker can be read up to 60 feet away. And because it broadcasts a unique ID, you can be tracked by it. In the demonstration, the researchers built a surveillance device (at a cost of about $250) and interfaced their surveillance system with Google Maps. Details are in the paper. Very scary.

This is a great demonstration for anyone who is skeptical that RFID chips can be used to track people. It’s a good example because the chips have no personal identifying information, yet can still be used to track people. As long as the chips have unique IDs, those IDs can be used for surveillance.
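The mechanics are almost trivially simple, which is the point. Here is a sketch with invented tag IDs and reader locations (the Washington researchers' actual system is more elaborate): stationary readers log every unique ID they hear, and sorting one ID's sightings by time reconstructs its owner's movements.

```python
from collections import defaultdict

# (tag_id, unix_timestamp, reader_location): all values invented
sightings = [
    ("0x3fa9", 1166000000, "coffee shop"),
    ("0x17c2", 1166000900, "gym"),
    ("0x3fa9", 1166001800, "library"),
    ("0x3fa9", 1166003600, "apartment lobby"),
]

# Group sightings by tag: no name is needed, the unique ID is identity enough.
tracks = defaultdict(list)
for tag_id, timestamp, location in sightings:
    tracks[tag_id].append((timestamp, location))

# Chronological order turns isolated sightings into a movement trail.
for tag_id, trail in tracks.items():
    trail.sort()
    print(tag_id, "->", [location for _, location in trail])
```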

To me, the real significance of this work is how easy it was. The people who designed the Nike/iPod system put zero thought into security and privacy issues. Unless we enact some sort of broad law requiring companies to add security into these sorts of systems, companies will continue to produce devices that erode our privacy through new technologies. Not on purpose, not because they’re evil—just because it’s easier to ignore the externality than to worry about it.

Posted on December 12, 2006 at 1:11 PM

