Entries Tagged "economics of security"


Computer Security and Liability

Information insecurity is costing us billions. We pay for it in theft: information theft, financial theft. We pay for it in productivity loss, both when networks stop working and in the dozens of minor security inconveniences we all have to endure. We pay for it when we have to buy security products and services to reduce those other two losses. We pay for security, year after year.

The problem is that all the money we spend isn’t fixing the problem. We’re paying, but we still end up with insecurities.

The problem is insecure software. It’s bad design, poorly implemented features, inadequate testing and security vulnerabilities from software bugs. The money we spend on security is to deal with the effects of insecure software.

And that’s the problem. We’re not paying to improve the security of the underlying software. We’re paying to deal with the problem rather than to fix it.

The only way to fix this problem is for vendors to fix their software, and they won’t do it until it’s in their financial best interests to do so.

Today, the costs of insecure software aren’t borne by the vendors that produce the software. In economics, this is known as an externality, the cost of a decision that’s borne by people other than those making the decision.

There are no real consequences to the vendors for having bad security or low-quality software. Even worse, the marketplace often rewards low quality. More precisely, it rewards additional features and timely release dates, even if they come at the expense of quality.

If we expect software vendors to reduce features, lengthen development cycles and invest in secure software development processes, it needs to be in their financial best interests to do so. If we expect corporations to spend significant resources on their own network security—especially the security of their customers—it also needs to be in their financial best interests.

Liability law is a way to make it in those organizations’ best interests. Raising the risk of liability raises the costs of doing it wrong and therefore increases the amount of money a CEO is willing to spend to do it right. Security is risk management; liability fiddles with the risk equation.

Basically, we have to tweak the risk equation so the CEO cares about actually fixing the problem, and putting pressure on his balance sheet is the best way to do that.
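To make the risk equation concrete, here is a minimal sketch, with entirely hypothetical numbers, of how liability changes a vendor’s expected costs: once the vendor’s expected share of breach losses exceeds the cost of secure development, fixing the software becomes the cheaper option.

    # Toy model of the vendor's risk equation; all figures are hypothetical.
    breach_probability = 0.10      # chance a shipped vulnerability is exploited
    customer_damage = 5_000_000    # losses currently borne by customers and network owners
    secure_dev_cost = 250_000      # cost of better design, testing and longer release cycles

    def expected_vendor_cost(liability_share):
        """Vendor's expected cost of shipping the insecure version,
        given the fraction of breach losses it must pay."""
        return breach_probability * customer_damage * liability_share

    # Without liability the breach is an externality: the vendor's expected cost is 0,
    # so spending 250,000 on secure development looks like a pure loss.
    print(expected_vendor_cost(0.0))   # 0.0

    # With even partial liability (say 60% of damages), the expected cost is 300,000,
    # which exceeds the cost of doing it right: now security pays for itself.
    print(expected_vendor_cost(0.6))   # 300000.0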

Clearly, this isn’t all or nothing. There are many parties involved in a typical software attack. There’s the company that sold the software with the vulnerability in the first place. There’s the person who wrote the attack tool. There’s the attacker himself, who used the tool to break into a network. There’s the owner of the network, who was entrusted with defending that network. One hundred percent of the liability shouldn’t fall on the shoulders of the software vendor, just as 100% shouldn’t fall on the attacker or the network owner. But today, 100% of the cost falls directly on the network owner, and that just has to stop.

We will always pay for security. If software vendors have liability costs, they’ll pass those on to us. It might not be cheaper than what we’re paying today. But as long as we’re going to pay, we might as well pay to fix the problem. Forcing the software vendor to pay to fix the problem and then pass those costs on to us means that the problem might actually get fixed.

Liability changes everything. Currently, there is no reason for a software company not to offer feature after feature after feature. Liability forces software companies to think twice before changing something. Liability forces companies to protect the data they’re entrusted with. Liability means that those in the best position to fix the problem are actually responsible for the problem.

Information security isn’t a technological problem. It’s an economics problem. And the way to improve information security is to fix the economics problem. Do that, and everything else will follow.

This essay originally appeared in Computerworld.

An interesting rebuttal of this piece is here.

Posted on November 3, 2004 at 3:00 PM

World Series Security

The World Series is no stranger to security. Fans try to sneak into the ballpark without tickets, or with counterfeit tickets. Outside food and alcohol are often prohibited, to enforce the monopoly of the high-priced concessions. Violence is always a risk: both small fights and larger-scale riots that can break out when fans of both teams are packed so closely together, like the one that almost happened during the sixth game of the AL series.

Today, the new risk is terrorism. Security at the Olympics cost $1.5 billion. $50 million each was spent at the Democratic and Republican conventions. There has been no public statement about the security bill for the World Series, but it’s reasonable to assume it will be impressive.

In our fervor to defend ourselves, it’s important that we spend our money wisely. Much of what people think of as security against terrorism doesn’t actually make us safer. Even in a world of high-tech security, the most important solution is the guy watching to keep beer bottles from being thrown onto the field.

Generally, security measures that defend specific targets are wasteful, because they can be avoided simply by switching targets. If we completely defend the World Series from attack, and the terrorists bomb a crowded shopping mall instead, little has been gained.

Even so, some high-profile locations, like national monuments and symbolic buildings, and some high-profile events, like political conventions and championship sporting events, warrant additional security. What additional measures make sense?

ID checks don’t make sense. Everyone has an ID. Even the 9/11 terrorists had IDs. What we want is to somehow check intention; is the person going to do something bad? But we can’t do that, so we check IDs instead. It’s a complete waste of time and money, and does absolutely nothing to make us safer.

Automatic face recognition systems don’t work. Computers that automatically pick terrorists out of crowds are a great movie plot device, but they don’t work in the real world. We don’t have a comprehensive photographic database of known terrorists. Even worse, the face recognition technology is so faulty that it often can’t make the matches even when we do have decent photographs. We tried it at the 2001 Super Bowl; it was a failure.

Airport-like attendee screening doesn’t work. The terrorists who took over the Russian school sneaked their weapons in long before their attack. And screening fans is only a small part of the solution. There are simply too many people, vehicles, and supplies moving in and out of a ballpark regularly. This kind of security failed at the Olympics, as reporters proved again and again that they could sneak all sorts of things into the stadiums undetected.

What does work is people: smart security officials watching the crowds. It’s called “behavior recognition,” and it requires trained personnel looking for suspicious behavior. Does someone look out of place? Is he nervous, and not watching the game? Is he not cheering, hissing, booing, and waving like a sports fan would?

This is what good policemen do all the time. It’s what Israeli airport security does. It works because instead of relying on checkpoints that can be bypassed, it relies on the human ability to notice something that just doesn’t feel right. It’s intuition, and it’s far more effective than computerized security solutions.

Will this result in perfect security? Of course not. No security measures are guaranteed; all we can do is reduce the odds. And the best way to do that is to pay attention. A few hundred plainclothes policemen, walking around the stadium and watching for anything suspicious, will provide more security against terrorism than almost anything else we can reasonably do.

And the best thing about policemen is that they’re adaptable. They can deal with terrorist threats, and they can deal with more common security issues, too.

Most of the threats at the World Series have nothing to do with terrorism; unruly or violent fans are a much more common problem. And more likely than a complex 9/11-like plot is a lone terrorist with a gun, a bomb, or something that will cause panic. But luckily, the security measures ballparks have already put in place to protect against the former also help protect against the latter.

Originally published by UPI.

Posted on October 25, 2004 at 6:31 PM

Technology and Counterterrorism

Technology makes us safer.

Communications technologies ensure that emergency response personnel can communicate with each other in an emergency—whether police, fire or medical. Bomb-sniffing machines now routinely scan airplane baggage. Other technologies may someday detect contaminants in our water supply or our atmosphere.

Throughout law enforcement and intelligence investigation, different technologies are being harnessed for the good of defense. However, technologies designed to secure specific targets have limited value.

By its very nature, defense against terrorism means we must be prepared for anything. This makes it expensive—if not nearly impossible—to deploy threat-specific technological advances at all the places where they’re likely needed. So while it’s good to have bomb-detection devices in airports and bioweapon detectors in crowded subways, defensive technology cannot be applied at every conceivable target for every conceivable threat. If we spent billions of dollars securing airports and the terrorists shifted their attacks to shopping malls, we wouldn’t gain any security as a society.

It’s far more effective to try and mitigate the general threat. For example, technologies that improve intelligence gathering and analysis could help federal agents quickly chase down information about suspected terrorists. The technologies could help agents more rapidly uncover terrorist plots of any type and aimed at any target, from nuclear plants to the food supply. In addition, technologies that foster communication, coordination and emergency response could reduce the effects of a terrorist attack, regardless of what form the attack takes. We get the most value for our security dollar when we can leverage technology to extend the capabilities of humans.

Just as terrorists can use technology more or less wisely, we as defenders can do the same. It is only by keeping in mind the strengths and limitations of technology that we can increase our security without wasting money, freedoms or civil liberties, and without making ourselves more vulnerable to other threats. Security is a trade-off, and it is important that we use technologies that enable us to make better trade-offs and not worse ones.

Originally published on CNet.

Posted on October 20, 2004 at 4:35 PM

