Entries Tagged "debates"


Security in the Cloud

One of the basic philosophies of security is defense in depth: overlapping systems designed to provide security even if one of them fails. An example is a firewall coupled with an intrusion-detection system (IDS). Defense in depth provides security, because there’s no single point of failure and no assumed single vector for attacks.

It is for this reason that a choice between implementing network security in the middle of the network—in the cloud—or at the endpoints is a false dichotomy. No single security system is a panacea, and it’s far better to do both.

This kind of layered security is precisely what we’re seeing develop. Traditionally, security was implemented at the endpoints, because that’s what the user controlled. An organization had no choice but to put its firewalls, IDSs, and anti-virus software inside its network. Today, with the rise of managed security services and other outsourced network services, additional security can be provided inside the cloud.

I’m all in favor of security in the cloud. If we could build a new Internet today from scratch, we would embed a lot of security functionality in the cloud. But even that wouldn’t substitute for security at the endpoints. Defense in depth beats a single point of failure, and security in the cloud is only part of a layered approach.

For example, consider the various network-based e-mail filtering services available. They do a great job of filtering out spam and viruses, but it would be folly to consider them a substitute for anti-virus security on the desktop. Many e-mails are internal only, never entering the cloud at all. Worse, an attacker might open up a message gateway inside the enterprise’s infrastructure. Smart organizations build defense in depth: e-mail filtering inside the cloud plus anti-virus on the desktop.

The same reasoning applies to network-based firewalls and intrusion-prevention systems (IPS). Security would be vastly improved if the major carriers implemented cloud-based solutions, but they’re no substitute for traditional firewalls, IDSs, and IPSs.

This should not be an either/or decision. At Counterpane, for example, we offer cloud services and more traditional network and desktop services. The real trick is making everything work together.

Security is about technology, people, and processes. Regardless of where your security systems are, they’re not going to work unless human experts are paying attention. Real-time monitoring and response is what’s most important; where the equipment goes is secondary.

Security is always a trade-off. Budgets are limited and economic considerations regularly trump security concerns. Traditional security products and services are centered on the internal network, because that’s the target of attack. Compliance focuses on that for the same reason. Security in the cloud is a good addition, but it’s not a replacement for more traditional network and desktop security.

This was published as a “Face-Off” in Network World.

The opposing view is here.

Posted on February 15, 2006 at 8:18 AM

Howard Schmidt on Software Vulnerabilities

Howard Schmidt was misquoted in the article that spurred my rebuttal.

This essay outlines what he really thinks:

Like it or not, the hard work of developers often takes the brunt of malicious hacker attacks.

Many people know that developers are often under intense pressure to deliver more features on time and under budget. Few developers get the time to review their code for potential security vulnerabilities. When they do get the time, they often don’t have secure-coding training and lack the automated tools to prevent hackers from using hundreds of common exploit techniques to trigger malicious attacks.

So what can software vendors do? In a sense, a big part of the answer is relatively old fashioned; the developers need to be accountable to their employers and provided with incentives, better tools and proper training.

He’s against making software vendors liable for defects in their products, even though manufacturers in every other industry are:

I always have been, and continue to be, against any sort of liability actions as long as we continue to see market forces improve software. Unfortunately, introducing vendor liability to solve security flaws hurts everybody, including employees, shareholders and customers, because it raises costs and stifles innovation.

After all, when companies are faced with large punitive judgments, a frequent step is often to cut salaries, increase prices or even reduce employees. This is not good for anyone.

And he closes with:

In the end, what security requires is the same attention any business goal needs. Employers should expect their employees to take pride in and own a certain level of responsibility for their work. And employees should expect their employers to provide the tools and training they need to get the job done. With these expectations established and goals agreed on, perhaps the software industry can do a better job of strengthening the security of its products by reducing software vulnerabilities.

That first sentence, I think, nicely sums up what’s wrong with his argument. If security is to be a business goal, then it needs to make business sense. Right now, it makes more business sense not to produce secure software products than it does to produce secure software products. Any solution needs to address that fundamental market failure, instead of simply wishing it were true.

Posted on November 8, 2005 at 7:34 AM

More on Two-Factor Authentication

Recently I published an essay arguing that two-factor authentication is an ineffective defense against identity theft. For example, issuing tokens to online banking customers won’t reduce fraud, because new attack techniques simply ignore the countermeasure. Unfortunately, some took my essay as a condemnation of two-factor authentication in general. This is not true. It’s simply a matter of understanding the threats and the attacks.

Passwords just don’t work anymore. As computers have gotten faster, password guessing has gotten easier. Ever-more-complicated passwords are required to evade password-guessing software. At the same time, there’s an upper limit to how complex a password users can be expected to remember. About five years ago, these two lines crossed: It is no longer reasonable to expect users to have passwords that can’t be guessed. For anything that requires reasonable security, the era of passwords is over.
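As a rough illustration of that crossover, here is a back-of-the-envelope sketch comparing the size of a password's keyspace with an assumed attacker guess rate. The specific numbers (character sets, lengths, ten billion guesses per second) are illustrative assumptions, not measurements.

```python
# Back-of-the-envelope estimate of an offline password-guessing attack.
# The guess rate and password choices below are illustrative assumptions.

def keyspace(alphabet_size: int, length: int) -> int:
    """Number of possible passwords for a given character set and length."""
    return alphabet_size ** length

def years_to_exhaust(space: int, guesses_per_second: float) -> float:
    """Worst-case time to try every candidate password, in years."""
    seconds = space / guesses_per_second
    return seconds / (60 * 60 * 24 * 365)

GUESS_RATE = 1e10  # assumed guesses per second for a well-equipped attacker

# An 8-character, lowercase-only password: exhausted in roughly 20 seconds.
print(years_to_exhaust(keyspace(26, 8), GUESS_RATE))

# A 12-character password drawn from upper/lower case and digits:
# on the order of ten thousand years at the same guess rate.
print(years_to_exhaust(keyspace(62, 12), GUESS_RATE))
```

The asymmetry is the point: adding characters helps, but only up to what people can actually remember.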

Two-factor authentication solves this problem. It works against passive attacks: eavesdropping and password guessing. It protects against users choosing weak passwords, telling their passwords to their colleagues or writing their passwords on pieces of paper taped to their monitors. For an organization trying to improve access control for its employees, two-factor authentication is a great idea. Microsoft is integrating two-factor authentication into its operating system, another great idea.
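By way of illustration, here is a minimal sketch of how a time-based one-time code, the kind of second factor a hardware token or phone app displays, can be derived from a secret shared with the server. It is a generic illustration, not any vendor's actual scheme; the secret, interval, and digit count are assumptions.

```python
# Minimal sketch of a time-based one-time code, the kind of second factor a
# hardware token or phone app displays. Illustrative only: real deployments
# follow the published standards and handle clock skew, provisioning of the
# shared secret, and rate limiting.
import hashlib
import hmac
import struct
import time

def one_time_code(shared_secret: bytes, interval: int = 30, digits: int = 6) -> str:
    """Derive a short numeric code from a shared secret and the current time window."""
    counter = int(time.time()) // interval
    message = struct.pack(">Q", counter)
    digest = hmac.new(shared_secret, message, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                                # dynamic truncation
    value = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(value % (10 ** digits)).zfill(digits)

# Both the user's token and the bank's server hold the same secret, so the
# server can recompute the code and compare. A guessed or written-down
# password is no longer enough on its own; but a live man-in-the-middle can
# still relay the fresh code, which is the kind of attack discussed below.
secret = b"example-shared-secret"  # hypothetical secret, for illustration only
print(one_time_code(secret))
```

Because the code changes every interval and never repeats usefully, eavesdropping on one login doesn't let an attacker replay it later, which is exactly the class of passive attack two-factor authentication addresses.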

What two-factor authentication won’t do is prevent identity theft and fraud. It’ll prevent certain tactics of identity theft and fraud, but criminals simply will switch tactics. We’re already seeing fraud tactics that completely ignore two-factor authentication. As banks roll out two-factor authentication, criminals simply will switch to these new tactics.

Security is always an arms race, and you could argue that this situation is simply the cost of treading water. The problem with this reasoning is it ignores countermeasures that permanently reduce fraud. By concentrating on authenticating the individual rather than authenticating the transaction, banks are forced to defend against criminal tactics rather than the crime itself.

Credit cards are a perfect example. Notice how little attention is paid to cardholder authentication. Clerks barely check signatures. People use their cards over the phone and on the Internet, where the card’s existence isn’t even verified. The credit card companies spend their security dollar authenticating the transaction, not the cardholder.
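To make "authenticating the transaction" concrete, here is a toy sketch that scores a charge against the account's recent history rather than re-checking the cardholder's identity. The rules and thresholds are invented for illustration and bear no relation to any issuer's real fraud model.

```python
# Toy illustration of "authenticating the transaction, not the cardholder":
# each charge is scored against the account's recent behavior. The rules and
# thresholds are invented for illustration, not any issuer's real model.
from dataclasses import dataclass

@dataclass
class Transaction:
    amount: float
    country: str
    merchant_category: str

def looks_suspicious(txn: Transaction, history: list) -> bool:
    """Flag a charge that is out of pattern for this account's recent activity."""
    if not history:
        return txn.amount > 500.0                      # arbitrary cold-start threshold
    usual_max = max(t.amount for t in history)
    usual_countries = {t.country for t in history}
    return (
        txn.amount > 3 * usual_max                     # far larger than anything recent
        or txn.country not in usual_countries          # first charge from a new country
    )

history = [Transaction(42.00, "US", "grocery"), Transaction(18.50, "US", "coffee")]
print(looks_suspicious(Transaction(900.00, "RO", "electronics"), history))  # True
print(looks_suspicious(Transaction(25.00, "US", "grocery"), history))       # False
```

The design choice matters: a check like this works no matter how the criminal obtained the credentials, which is why it addresses the crime itself rather than one tactic.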

Two-factor authentication is a long-overdue solution to the problem of passwords. I welcome its increasing popularity, but identity theft and bank fraud are not results of password problems; they stem from poorly authenticated transactions. The sooner people realize that, the sooner they’ll stop advocating stronger authentication measures and the sooner security will actually improve.

This essay previously appeared in Network World as a “Face Off.” Joe Uniejewski of RSA Security wrote an opposing position. Another article on the subject was published at SearchSecurity.com.

One way to think about this—a phrasing I didn’t think about until after writing the above essay—is that two-factor authentication solves security problems involving authentication. The current wave of attacks against financial systems is not exploiting vulnerabilities in the authentication system, so two-factor authentication doesn’t help.

Posted on April 12, 2005 at 11:02 AM

Regulation, Liability, and Computer Security

For a couple of years I have been arguing that liability is a way to solve the economic problems underlying our computer security problems. At the RSA conference this year, I was on a panel on that very topic.

This essay argues that regulation, not liability, is the correct way to solve the underlying economic problems, using the analogy of high-pressure steam engines in the 1800s.

Definitely worth thinking about some more.

Posted on February 25, 2005 at 8:00 AM

Technology and Counterterrorism

Technology makes us safer.

Communications technologies ensure that emergency response personnel can communicate with each other in an emergency—whether police, fire or medical. Bomb-sniffing machines now routinely scan airplane baggage. Other technologies may someday detect contaminants in our water supply or our atmosphere.

Throughout law enforcement and intelligence investigation, different technologies are being harnessed for the good of defense. However, technologies designed to secure specific targets have a limited value.

By its very nature, defense against terrorism means we must be prepared for anything. This makes it expensive—if not nearly impossible—to deploy threat-specific technological advances at all the places where they’re likely needed. So while it’s good to have bomb-detection devices in airports and bioweapon detectors in crowded subways, defensive technology cannot be applied at every conceivable target for every conceivable threat. If we spent billions of dollars securing airports and the terrorists shifted their attacks to shopping malls, we wouldn’t gain any security as a society.

It’s far more effective to try to mitigate the general threat. For example, technologies that improve intelligence gathering and analysis could help federal agents quickly chase down information about suspected terrorists. The technologies could help agents more rapidly uncover terrorist plots of any type, aimed at any target, from nuclear plants to the food supply. In addition, technologies that foster communication, coordination and emergency response could reduce the effects of a terrorist attack, regardless of what form the attack takes. We get the most value for our security dollar when we can leverage technology to extend the capabilities of humans.

Just as terrorists can use technology more or less wisely, we as defenders can do the same. It is only by keeping in mind the strengths and limitations of technology that we can increase our security without wasting money, freedoms or civil liberties, and without making ourselves more vulnerable to other threats. Security is a trade-off, and it is important that we use technologies that enable us to make better trade-offs and not worse ones.

Originally published on CNet.

Posted on October 20, 2004 at 4:35 PM

