Improved Security Requires IT Diversity
In his recently released book, Beyond Fear: Thinking Sensibly About Security in an Uncertain World (Copernicus Books, 2003), security guru Bruce Schneier argues for a more common-sense and less technology-centric approach to both IT security and physical security. In this interview with Computerworld, Schneier shares his views on IT security.
You recently co-wrote the report "CyberInsecurity: The Cost of Monopoly. How the Dominance of Microsoft's Products Poses a Risk to Security." Would you have written it if the world had been standardized around another operating system?
Of course. The problem is not specific to Microsoft; it's a general problem of monocultures. The security risks would be no different if the country standardized on Mac OS X or Linux. The security risks were the same in 1988, when the Morris worm propagated freely in an Internet that standardized on Unix.
Are there benefits to having a homogeneous IT environment that outweigh the potential risks?
In some ways, it's a judgment call. The question is whether you spread your eggs among many baskets, or put them all in one basket and guard that basket carefully. On balance, I think that the risks of a monoculture in operating systems outweigh the advantages.
Last year you wrote about the need to fix network security by hacking the business climate. What did you mean?
Network security is plagued by good technical solutions that just don't work. Companies install firewalls but don't configure them properly. Network administrators don't install patches. Software companies don't write secure software. The problem here is not technical, but economic.
What do you mean when you say that secure software is an economic problem?
The economics of security is such that the effects of insecurity are largely an externality—the costs aren't borne by the companies making the security decisions.
The only way we can fix computer security is to fix this economic problem. We need to take the companies in the best position to fix all these security problems—the software manufacturers—and make it in their best interest to do so. For years I've advocated software liability as a way to do this. Once a company like Microsoft is liable for damages resulting from its software vulnerabilities, you can be sure it will start taking those vulnerabilities seriously.
But don't users have a responsibility as well?
It's clear that Microsoft doesn't bear 100% of the responsibility for these problems. But it's also clear that it doesn't bear zero percent. Exactly how much is for the courts to decide; they apportion contributory negligence among parties all the time.
What's to be done about the patching problem?
There is nothing that can be done. There are too many patches, they don't work very well, and companies can't keep up. Blaming companies for not installing patches is blaming the victim—it's not right, and it's not fair. Software quality needs to improve; patching after the fact no longer works.
Why hasn't technology helped make us physically safer?
Technology hasn't made us safer because safety is not a function of technology. Real security comes from people. Technology is just a security tool. There are lots of examples post-9/11 where [people have assumed] that technology will solve their problems. People think that magic technology will make them safe. That is not the case.
You argue that the focus should not be so much on threat avoidance but on risk management. What do you mean by that?
Security is always a trade-off: What are you getting vs. what are you giving up? Sometimes more security makes sense, and sometimes less security makes sense. When people think about security, they inherently think in terms of this risk management trade-off mentality. It doesn't matter how effective a security system is at avoiding the threat. If a security system does not make business sense, it's not going to be installed.
How can companies move from the threat-avoidance IT security model to risk management?
All it takes is for the CFO to be in charge of security. The last thing you want is for security people to make these sorts of security decisions, because they don't have a broad enough view. You need a financial person to look at the risks, the risk reductions and the costs.
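As a hypothetical illustration of the financial view Schneier describes (this sketch is not from the interview, and all figures are invented), one common way to weigh risks, risk reductions and costs is annualized loss expectancy: a security control makes financial sense only if the yearly loss it prevents exceeds what it costs per year.

```python
# Hypothetical sketch of a risk-management calculation; the function
# names and all dollar figures are invented for illustration.

def annualized_loss_expectancy(single_loss: float, annual_rate: float) -> float:
    """Expected yearly loss: cost per incident times expected incidents per year."""
    return single_loss * annual_rate

def mitigation_worth_it(ale_before: float, ale_after: float, annual_cost: float) -> bool:
    """A control makes financial sense if the risk reduction it buys
    exceeds its yearly cost."""
    return (ale_before - ale_after) > annual_cost

# A breach costing $200,000, expected once every two years, vs. once a decade
# after deploying a control that costs $50,000 a year:
ale_before = annualized_loss_expectancy(200_000, 0.5)   # $100,000/yr
ale_after = annualized_loss_expectancy(200_000, 0.1)    # $20,000/yr
print(mitigation_worth_it(ale_before, ale_after, annual_cost=50_000))  # True
```

On these made-up numbers the control reduces expected losses by $80,000 a year for $50,000, so a financial decision-maker would approve it; with a smaller risk reduction the same control would fail the test, which is the trade-off mentality Schneier is arguing for.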
Why is it so hard for companies to get IT security funding these days?
From the point of view of the CEO, the risks aren't very great. It's just not worth spending a lot of money on security. That view is changing as we speak, however.
What's driving that change?
The increasingly public Internet epidemics. It's in the news all the time.
Why are companies having such a hard time measuring the effectiveness of their IT security efforts?
It's hard to measure how effective security is. If no one ever robs your home, does that mean your home security is good, or that no one has bothered trying? In some ways, the best you can do is compare yourself with the houses in your neighborhood, measuring comparables. The problem is that there is no standard benchmark against which to measure your own security. Even worse, if you've had no successful attacks, you might get your budget slashed because "obviously" there's no need.
What's your position on full disclosure of vulnerabilities?
The only reason that software companies are paying attention to vulnerabilities and issuing patches is because of full disclosure. Before researchers started publishing vulnerabilities publicly, software companies would routinely deny that the vulnerabilities existed. Full disclosure is what's getting them to take security seriously, and it's what's keeping them honest.
Yes, it also helps the bad guys. But the benefits grossly outweigh the disadvantages.