Entries Tagged "vulnerabilities"


The Zotob Worm

If you’ll forgive the comparison to hurricanes, Internet epidemics are a lot like severe weather: they happen unpredictably, they hit some segments of the population harder than others, and your preparation beforehand determines how effective your defense will be.

Zotob was the first major worm outbreak since MyDoom in January 2004. It happened quickly—less than five days after Microsoft published a critical security bulletin (its 39th of the year). Zotob’s effects varied greatly from organization to organization: some networks were brought to their knees, while others didn’t even notice.

The worm started spreading on Sunday, 14 August. Honestly, it wasn’t that big a deal, but it got a lot of play in the press because it hit several major news outlets, most notably CNN. If a news organization is directly affected by something, it’s much more likely to report extensively on it. But my company, Counterpane Internet Security, monitors more than 500 networks worldwide, and we didn’t think Zotob was worth all the press coverage it got.

By the 17th, there were at least a dozen other worms that exploited the same vulnerability, both Zotob variants and others that were completely different. Most of them tried to recruit computers for bot networks, and some of the different variants warred against each other—stealing “owned” computers back and forth. If your network was infected, it was a mess.

Two weeks later, the 18-year-old who wrote the original Zotob worm was arrested, along with the 21-year-old who paid him to write it. It seems likely the person who funded the worm’s creation was not a hacker, but rather a criminal looking to profit.

The nature of worms has changed in the past few years. Most worms used to be written by hackers looking for prestige or simply wanting to cause damage. Today, worms are increasingly written or commissioned by criminals. By taking over computers, they can send spam, launch denial-of-service extortion attacks, or harvest credit-card numbers and other personal information.

What could you have done beforehand to protect yourself against Zotob and its kin? “Install the patch” is the obvious answer, but it’s not really a satisfactory one. There are simply too many patches. Although a single computer user can easily set up patches to automatically download and install—at least Microsoft Windows system patches—large corporate networks can’t. Far too often, patches cause other things to break.

It would be great to know which patches are actually important and which ones just sound important. Before that weekend in August, the patch that would have protected against Zotob was just another patch; by Monday morning, installing it was the most important thing a sysadmin could do to secure the network.

Microsoft had six new patches available on 9 August: three designated as critical (including the one that Zotob used), one important, and two moderate. Could you have guessed beforehand which one would turn out to matter most? When the next batch of patches is released, will you know which ones you can put off and which ones require you to drop everything, test, and install across your network?

Given that it’s impossible to know what’s coming, how you respond to an actual worm largely determines your defense’s effectiveness. You might need to respond quickly, and you most certainly need to respond accurately. Since you can’t know beforehand what the necessary response will be, you need a process for responding. Employees come and go, so only a process ensures continuity of effective security. You need accurate and timely information to fuel that process. And finally, you need experts to interpret that information, decide what to do, and implement the solution.

The Zotob storm was both typical and unique. It started soon after the vulnerability was published, but I don’t think that made a difference. Even worms that use six-month-old vulnerabilities find huge swaths of the Internet unpatched. It was a surprise, but they all are.

This essay will appear in the November/December 2005 issue of IEEE Security & Privacy.

Posted on November 11, 2005 at 7:46 AM

Howard Schmidt on Software Vulnerabilities

Howard Schmidt was misquoted in the article that spurred my rebuttal.

This essay outlines what he really thinks:

Like it or not, the hard work of developers often takes the brunt of malicious hacker attacks.

Many people know that developers are often under intense pressure to deliver more features on time and under budget. Few developers get the time to review their code for potential security vulnerabilities. When they do get the time, they often don’t have secure-coding training and lack the automated tools to prevent hackers from using hundreds of common exploit techniques to trigger malicious attacks.

So what can software vendors do? In a sense, a big part of the answer is relatively old fashioned; the developers need to be accountable to their employers and provided with incentives, better tools and proper training.

He’s against making vendors liable for defects in their products, even though manufacturers in virtually every other industry are:

I always have been, and continue to be, against any sort of liability actions as long as we continue to see market forces improve software. Unfortunately, introducing vendor liability to solve security flaws hurts everybody, including employees, shareholders and customers, because it raises costs and stifles innovation.

After all, when companies are faced with large punitive judgments, a frequent step is often to cut salaries, increase prices or even reduce employees. This is not good for anyone.

And he closes with:

In the end, what security requires is the same attention any business goal needs. Employers should expect their employees to take pride in and own a certain level of responsibility for their work. And employees should expect their employers to provide the tools and training they need to get the job done. With these expectations established and goals agreed on, perhaps the software industry can do a better job of strengthening the security of its products by reducing software vulnerabilities.

That first sentence, I think, nicely sums up what’s wrong with his argument. If security is to be a business goal, then it needs to make business sense. Right now, it makes more business sense to produce insecure software than to produce secure software. Any solution needs to address that fundamental market failure, instead of simply wishing it away.

Posted on November 8, 2005 at 7:34 AM

Oracle's Password Hashing

Here’s a paper on Oracle’s password hashing algorithm. The algorithm isn’t very good.

In this paper the authors examine the mechanism used in Oracle databases for protecting users’ passwords. We review the algorithm used for generating password hashes, and show that the current mechanism presents a number of weaknesses, making it straightforward for an attacker with limited resources to recover a user’s plaintext password from the hashed value. We also describe how to implement a password recovery tool using off-the-shelf software. We conclude by discussing some possible attack vectors and recommendations to mitigate this risk.
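
To see why, it helps to look at the mechanism the paper describes: username and password are concatenated and uppercased, expanded to 16-bit characters, and run twice through DES-CBC under a fixed, publicly known key. Here is a minimal Python sketch of that legacy (pre-11g) hash. It follows the paper’s published description rather than Oracle’s actual code, and it assumes the third-party pycryptodome library:

    # Sketch of the legacy Oracle (pre-11g) password hash, as described
    # in the paper. Illustration only; requires pycryptodome.
    from Crypto.Cipher import DES

    FIXED_KEY = bytes.fromhex("0123456789ABCDEF")  # same key for everyone
    ZERO_IV = b"\x00" * 8

    def oracle_hash(username: str, password: str) -> str:
        # Concatenate and uppercase: the only "salt" is the username,
        # and all case information in the password is thrown away.
        data = (username + password).upper().encode("utf-16-be")
        data += b"\x00" * (-len(data) % 8)  # zero-pad to a DES block
        # DES-CBC under the fixed key; the last ciphertext block keys
        # a second pass, whose last block is the stored hash.
        pass1 = DES.new(FIXED_KEY, DES.MODE_CBC, iv=ZERO_IV).encrypt(data)
        pass2 = DES.new(pass1[-8:], DES.MODE_CBC, iv=ZERO_IV).encrypt(data)
        return pass2[-8:].hex().upper()

    print(oracle_hash("SYSTEM", "MANAGER"))  # a famous default account

With no random salt and 56-bit DES doing all the work, an attacker who obtains the hash table can precompute a dictionary per username and brute-force the rest, which is exactly the paper’s point.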

Posted on November 3, 2005 at 1:20 PM

Liabilities and Software Vulnerabilities

My fourth column for Wired discusses liability for software vulnerabilities. Howard Schmidt argued that individual programmers should be liable for vulnerabilities in their code. (There’s a Slashdot thread on Schmidt’s comments.) I say that the software vendors, not the individual programmers, should be liable.

Click on the essay for the whole argument, but here’s the critical point:

If end users can sue software manufacturers for product defects, then the cost of those defects to the software manufacturers rises. Manufacturers are now paying the true economic cost for poor software, and not just a piece of it. So when they’re balancing the cost of making their software secure versus the cost of leaving their software insecure, there are more costs on the latter side. This will provide an incentive for them to make their software more secure.

To be sure, making software more secure will cost money, and manufacturers will have to pass those costs on to users in the form of higher prices. But users are already paying extra costs for insecure software: costs of third-party security products, costs of consultants and security-services companies, direct and indirect costs of losses. Making software manufacturers liable moves those costs around, and as a byproduct causes the quality of software to improve.

This is why Schmidt’s idea won’t work. He wants individual software developers to be liable, and not the corporations. This will certainly give pissed-off users someone to sue, but it won’t reduce the externality and it won’t result in more-secure software.

EDITED TO ADD: Dan Farber has a good commentary on my essay. He says I got Schmidt wrong, that Schmidt wants programmers to be accountable but not liable. Be that as it may, I still think that making software vendors liable is a good idea.

There has been some confusion about this in the comments: that somehow this means software vendors will be expected to achieve perfection, and will be 100% liable for anything short of that. Clearly that’s ridiculous, and that’s not the way liability works. But equally ridiculous is the notion that software vendors should be 0% liable for defects. Somewhere in the middle is a reasonable amount of liability, and that’s what I want the courts to figure out.

EDITED TO ADD: Howard Schmidt writes: “It is unfortunate that my comments were reported inaccurately; at least Dan Farber has been trying to correct the inaccurate reports with his blog. I do not support PERSONAL LIABILITY for the developers NOR do I support liability against vendors. Vendors are nothing more than people (employees included) and anything against them hurts the very people who need to be given better tools, training and support.”

Howard wrote an essay on the topic.

Posted on October 20, 2005 at 5:19 AM

The Keys to the Sydney Subway

Global secrets are generally considered poor security. The problems are twofold. One, you cannot apply any granularity to the security system; someone either knows the secret or does not. And two, global secrets are brittle. They fail badly; if the secret gets out, then the bad guys have a pretty powerful secret.

This is the situation right now in Sydney, where someone stole a master key that opens every train in the metropolitan area and can start any of them.

Unfortunately, this isn’t a thief who got lucky. It happened twice, and it’s possible that the keys were the target:

The keys, each of which could start every train, were taken in separate robberies within hours of each other from the North Shore Line although police believed the thefts were unrelated, a RailCorp spokeswoman said.

The first incident occurred at Gordon station when the driver of an empty train was robbed of the keys by two balaclava-clad men shortly after midnight on Sunday morning.

The second theft took place at Waverton Station on Sunday night when a driver was robbed of a bag, which contained the keys, she said.

So, what can someone do with a master key to the Sydney subway? The thief is more likely a criminal than a terrorist, but even so it’s a serious issue:

A spokesman for RailCorp told the paper it was taking the matter “very seriously,” but would not change the locks on its trains.

Instead, as of Sunday night, it had increased security around its sidings, with more patrols by private security guards and transit officers.

The spokesman said a “range of security measures” meant a train could not be stolen, even with the keys.

I don’t know if RailCorp should change the locks. I don’t know the risk: whether that “range of security measures” only protects against train theft—an unlikely scenario, if you ask me—or other potential scenarios as well. And I don’t know how expensive it would be to change the locks.

Another problem with global secrets is that it’s expensive to recover from a security failure.

And this certainly isn’t the first time a master key fell into the wrong hands:

Mr Graham said there was no point changing any of the metropolitan railway key locks.

“We could change locks once a week but I don’t think it reduces in any way the security threat as such because there are 2000 of these particular keys on issue to operational staff across the network and that is always going to be, I think, an issue.”

A final problem with global secrets is that it’s simply too easy to lose control of them.

Moral: Don’t rely on global secrets.
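
For electronic credentials, the standard way to avoid a global secret is key diversification: derive a separate key for each device from the master, so that one stolen key compromises one device rather than the whole fleet. A toy Python sketch follows; the names and the per-train framing are mine, and it obviously doesn’t map onto RailCorp’s physical locks:

    # Toy illustration of key diversification. Each train's key is
    # derived from a master key that never leaves the back office.
    import hashlib
    import hmac

    MASTER_KEY = b"kept-in-the-depot-safe"  # hypothetical master secret

    def train_key(train_id: str) -> bytes:
        # HMAC is one-way: a stolen train key reveals nothing about
        # the master key or about any other train's key.
        return hmac.new(MASTER_KEY, train_id.encode(), hashlib.sha256).digest()

    # Revocation is granular: re-key one train, not 2,000 locks.
    print(train_key("north-shore-4021").hex())

The trade-off is infrastructure: diversified keys only help if each lock can hold its own secret, which is exactly what physical master-keyed locks can’t do.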

Posted on September 1, 2005 at 8:06 AM

Bluetooth Spam

Advertisers are beaming unwanted content to Bluetooth phones from as far as 100 meters away.

Sure, it’s annoying, but worse, there are serious security risks. Don’t believe this:

Furthermore, there is no risk of downloading viruses or other malware to the phone, says O’Regan: “We don’t send applications or executable code.” The system uses the phone’s native download interface so they should be able to see the kind of file they are downloading before accepting it, he adds.

This company might not send executable code, but someone else certainly could. And what percentage of people who use Bluetooth phones can recognize “the kind of file they are downloading”?

We’ve already seen two ways to steal data from Bluetooth devices. And we know that more and more sensitive data is being stored on these small devices, increasing the risk. This is almost certainly another avenue for attack.

Posted on August 23, 2005 at 12:24 PM

