Entries Tagged "malware"

Faux Disclosure

Good essay on “faux disclosure”: disclosing a vulnerability without really disclosing it.

You’ve probably heard of full disclosure, the security philosophy that calls for making public all details of vulnerabilities. It has been the subject of debates among
researchers, vendors, and security firms. But the story that grabbed most of the headlines at the Black Hat Briefings in Las Vegas last week was based on a different type of disclosure. For lack of a better name, I’ll call it faux disclosure. Here’s why.

Security researchers Dave Maynor of ISS and Johnny Cache—a.k.a. Jon Ellch—demonstrated an exploit that allowed them to install a rootkit on an Apple laptop in less than a minute. Well, sort of; they showed a video of it, and also noted that they’d used a third-party Wi-Fi card in the demo of the exploit, rather than the MacBook’s internal Wi-Fi card. But they said that the exploit would work whether the third-party card—which they declined to identify—was inserted
in a Mac, Windows, or Linux laptop.

[…]

How is that for murky and non-transparent? The whole world is at risk—if the exploit is real—whenever the unidentified card is used. But they won’t say which card, although many sources presume the card is based on the Atheros chipset, which Apple employs.

It gets worse. Brian Krebs of the Washington Post, who first reported on the exploit, updated his original story and has reported that Maynor said, “Apple had leaned on Maynor and Ellch pretty hard not to make this an issue about the Mac drivers—mainly because Apple had not fixed the problem yet.”

That’s part of what is meant by full disclosure these days—giving the vendor a chance to fix the vulnerability before letting the whole world know about it. That way, the thinking goes, the only people who get hurt by it are the people who get exploited by it. But damage to the responsible vendor’s image is mitigated somewhat, and many in the security business seem to think that damage control is more important than anything that might happen to any of the vendor’s customers.

Big deal. Publicly traded corporations like Apple and Microsoft and all the rest have been known to ignore ethics, morality, any consideration of right or wrong, or anything at all that might divert them from their ultimate goal: to maximize profits. Because of this,
some corporations only speak the truth when it is in their best interest. Otherwise, they lie or maintain silence.

Full disclosure is the only thing that forces vendors to fix security problems. The further we move away from full disclosure, the less incentive vendors have to fix problems and the more at-risk we all are.

Posted on August 14, 2006 at 1:41 PM

Malware Distribution Project

In case you needed a comprehensive database of malware.

Malware Distribution Project (MD:Pro) offers developers of security systems and anti-malware products a vast collection of downloadable malware from a secure and reliable source, exclusively for the purposes of analysis, testing, research and development.

Bringing together for the first time a large back-catalogue of malware, computer underground related information and IT security resources under one project, this major new system also contains a large selection of undetected malware, along with an open, collaborative platform, where malware samples can be shared among its members. The database is constantly updated with new files, and maintained to keep it running at an optimum.

There are currently 271,712 files in the system.

This isn’t free. You can subscribe at 1,250 euros for a month, or 13,500 euros a year. (There are cheaper packages with less comprehensive access.)

They claim to have a stringent vetting process, ensuring that only legitimate researchers have access to this database:

It should be noted that we are not a malware/VX distribution site, nor do we condone the public spreading and/or distribution of such information, hence we will be vetting our registrants stringently. We do appreciate that this puts a severe restriction on private (individual) malware researchers and enthusiasts with limited or no budget, but we do feel that providing free malware for public research is out of the scope of this project.

EDITED TO ADD (8/8): The hacker group Cult of the Dead Cow also has a malware repository, free and with looser access restrictions.

Posted on August 8, 2006 at 7:56 AM

Why the Top-Selling Antivirus Programs Aren’t the Best

The top three antivirus programs—from Symantec, McAfee, and Trend Micro—are less likely to detect new viruses and worms than less popular programs, because virus writers specifically test their work against those programs:

On Wednesday, the general manager of Australia’s Computer Emergency Response Team (AusCERT), Graham Ingram, described how the threat landscape has changed—along with the skill of malware authors.

“We are getting code of a quality that is probably worthy of software engineers. Not application developers but software engineers,” said Ingram.

However, the actual reason the top-selling antivirus applications don’t work is that malware authors specifically test their Trojans and viruses to make sure they can bypass these applications before releasing them into the wild.

It’s interesting to watch the landscape change, as malware becomes less the province of hackers and more the province of criminals. This is one move in a continuous arms race between attacker and defender.

Posted on August 2, 2006 at 6:41 AM

Updating the Traditional Security Model

On the Firewall Wizards mailing list last year, Dave Piscitello made a fascinating observation. Commenting on the traditional four-step security model:

Authentication (who are you)
Authorization (what are you allowed to do)
Availability (is the data accessible)
Authenticity (is the data intact)

Piscitello said:

This model is no longer sufficient because it does not include asserting the trustworthiness of the endpoint device from which a (remote) user will authenticate and subsequently access data. Network admission and endpoint control are needed to determine that the device is free of malware (esp. key loggers) before you even accept a keystroke from a user. So let’s prepend “admissibility” to your list, and come up with a 5-legged stool, or call it the Pentagon of Trust.

He’s 100% right.
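
To make the five checks concrete, here is a minimal sketch in Python of an access-decision pipeline with Piscitello’s “admissibility” check prepended to the traditional four. Everything in it—the Request fields, the lookup tables, the function names—is hypothetical scaffolding for illustration, not any real product’s API:

```python
from dataclasses import dataclass

@dataclass
class Request:
    device_id: str    # the endpoint asking for admission
    credentials: str  # e.g., a password or token
    action: str       # what the user wants to do
    resource: str     # the data being accessed

# Toy lookup tables, for illustration only.
KNOWN_CLEAN_DEVICES = {"laptop-42"}            # endpoints that passed a malware scan
USERS = {"s3cret": "alice"}                    # credential -> user
PERMISSIONS = {"alice": {("read", "payroll")}} # user -> allowed (action, resource)
ONLINE_RESOURCES = {"payroll"}                 # resources currently reachable
CHECKSUMS_OK = {"payroll": True}               # resource -> integrity check passed

def is_admissible(device_id: str) -> bool:
    """Admissibility: is the endpoint itself trustworthy (patched,
    malware-free) before we accept a single keystroke from it?"""
    return device_id in KNOWN_CLEAN_DEVICES

def authenticate(credentials: str):
    """Authentication: who are you? Returns a user name or None."""
    return USERS.get(credentials)

def is_authorized(user: str, action: str, resource: str) -> bool:
    """Authorization: what are you allowed to do?"""
    return (action, resource) in PERMISSIONS.get(user, set())

def is_available(resource: str) -> bool:
    """Availability: is the data accessible?"""
    return resource in ONLINE_RESOURCES

def is_authentic(resource: str) -> bool:
    """Authenticity: is the data intact?"""
    return CHECKSUMS_OK.get(resource, False)

def handle(req: Request) -> bool:
    # Admissibility comes first: reject the device before reading
    # any input from the user, per Piscitello.
    if not is_admissible(req.device_id):
        return False
    user = authenticate(req.credentials)
    if user is None:
        return False
    return (is_authorized(user, req.action, req.resource)
            and is_available(req.resource)
            and is_authentic(req.resource))

print(handle(Request("laptop-42", "s3cret", "read", "payroll")))   # True
print(handle(Request("unknown-pc", "s3cret", "read", "payroll")))  # False: inadmissible
```

The point of the ordering is that the endpoint is vetted before any user input is accepted, exactly as Piscitello describes.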

Posted on August 1, 2006 at 2:03 PM

Bot Networks

What could you do if you controlled a network of thousands of computers—or, at least, could use the spare processor cycles on those machines? You could perform massively parallel computations: model nuclear explosions or global weather patterns, factor large numbers or find Mersenne primes, or attack cryptographic problems.

All of these are legitimate applications. And you can visit distributed.net and download software that allows you to donate your spare computer cycles to some of these projects. (You can help search for Optimal Golomb Rulers—even if you have no idea what they are.) You’ve got a lot of cycles to spare. There’s no reason that your computer can’t help search for extraterrestrial life as it, for example, sits idly waiting for you to read this essay.

The reason these things work is that they are consensual; none of these projects download software onto your computer without your knowledge. None of these projects control your computer without your consent. But there are lots of software programs that do just that.

The term used for a computer remotely controlled by someone else is a “bot.” A group of computers—thousands or even millions—controlled by someone else is a bot network. Estimates are that millions of computers on the internet today are part of bot networks, and the largest bot networks have over 1.5 million machines.

Initially, bot networks were used for just one thing: denial-of-service attacks. Hackers would use them against each other, fighting hacker feuds in cyberspace by attacking each other’s computers. The first widely publicized use of a distributed intruder tool—technically not a botnet, but practically the same thing—was in February 2000, when Canadian hacker Mafiaboy directed an army of compromised computers to flood CNN.com, Amazon.com, eBay, Dell Computer and other sites with debilitating volumes of traffic. Every newspaper carried that story.

These days, bot networks are more likely to be controlled by criminals than by hackers. The important difference is the motive: profit. Networks are being used to send phishing e-mails and other spam. They’re being used for click fraud. They’re being used as an extortion tool: Pay up or we’ll DDoS you!

Mostly, they’re being used to collect personal data for fraud—commonly called “identity theft.” Modern bot software doesn’t just attack other computers; it attacks its hosts as well. The malware is packed with keystroke loggers to steal passwords and account numbers. In fact, many bots automatically hunt for financial information, and some botnets have been built solely for this purpose—to gather credit card numbers, online banking passwords, PayPal accounts, and so on, from compromised hosts.

Swindlers are also using bot networks for click fraud. Google’s anti-fraud systems are sophisticated enough to detect thousands of clicks by one computer; it’s much harder to determine if a single click by each of thousands of computers is fraud, or just popularity.
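
The asymmetry is easy to see in code. A toy per-source threshold filter like the sketch below—Python, with a made-up threshold and log format—trivially catches one machine clicking thousands of times, but a botnet spending one click per host never trips it:

```python
from collections import Counter

CLICKS_PER_SOURCE_LIMIT = 20  # hypothetical daily per-IP threshold

def flag_suspicious(click_log):
    """click_log is a list of source IPs, one entry per ad click.
    Returns the sources that exceeded the per-source threshold."""
    counts = Counter(click_log)
    return {ip for ip, n in counts.items() if n > CLICKS_PER_SOURCE_LIMIT}

# One compromised machine clicking 5,000 times: trivially caught.
print(flag_suspicious(["10.0.0.1"] * 5000))  # {'10.0.0.1'}

# 5,000 bots clicking once each: looks exactly like popularity.
print(flag_suspicious(["10.0.%d.%d" % (i // 256, i % 256)
                       for i in range(5000)]))  # set()
```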

And, of course, most bots constantly search for other computers that can be infected and added to the bot network. (A 1.5 million-node bot network was discovered in the Netherlands last year. The command-and-control system was dismantled, but some of the bots are still active, infecting other computers and adding them to this defunct network.)

Modern bot networks are remotely upgradeable, so the operators can add new functionality to the bots at any time, or switch from one bot program to another. Bot authors regularly upgrade their botnets during development, or to evade detection by anti-virus and malware cleanup tools.

One application of bot networks that we haven’t seen all that much of is to launch a fast-spreading worm. (Some believe the Witty worm spread this way.) Much has been written about “flash worms” that can saturate the internet in 15 minutes or less. The situation gets even worse if 10,000 bots synchronize their watches and release the worm at exactly the same time. Why haven’t we seen more of this? My guess is because there isn’t any profit in it.

There’s no real solution to the botnet problem, because there’s no single problem. There are many different bot networks, controlled in many different ways, consisting of computers infected through many different vulnerabilities. Really, a bot network is nothing more than an attacker taking advantage of 1) one or more software vulnerabilities, and 2) the economies of scale that computer networks bring. It’s the same thing as distributed.net or SETI@home, only the attacker doesn’t ask your permission first.

As long as networked computers have vulnerabilities—and that’ll be for the foreseeable future—there’ll be bot networks. It’s a natural side-effect of a computer network with bugs.

This essay originally appeared on Wired.com.

EDITED TO ADD (7/27): DDoS extortion is a bigger problem than you might think. Right now it’s primarily targeted against fringe industries—online gaming, online gambling, online porn—located offshore, but we’re seeing more and more of it against mainstream companies in the U.S. and Europe.

EDITED TO ADD (7/27): Seems that Witty was definitely not seeded from a bot network.

Posted on July 27, 2006 at 6:35 AM

Hacked MySpace Server Infects a Million Computers with Malware

According to The Washington Post:

An online banner advertisement that ran on MySpace.com and other sites over the past week used a Windows security flaw to infect more than a million users with spyware when people merely browsed the sites with unpatched versions of Windows….

Clever attack.

EDITED TO ADD (7/27): It wasn’t MySpace that was hacked, but a server belonging to the third-party advertising service that MySpace uses. The ad probably appeared on other websites as well, but MySpace seems to have been the biggest one.

EDITED TO ADD (8/5): Ed Felten comments.

Posted on July 24, 2006 at 6:46 AM

Click Fraud and the Problem of Authenticating People

Google’s $6 billion-a-year advertising business is at risk because it can’t be sure that anyone is looking at its ads. The problem is called click fraud, and it comes in two basic flavors.

With network click fraud, you host Google AdSense advertisements on your own website. Google pays you every time someone clicks on its ad on your site. It’s fraud if you sit at the computer and repeatedly click on the ad or—better yet—write a computer program that repeatedly clicks on the ad. That kind of fraud is easy for Google to spot, so the clever network click fraudsters simulate different IP addresses, or install Trojan horses on other people’s computers to generate the fake clicks.

The other kind of click fraud is competitive. You notice your business competitor has bought an ad on Google, paying Google for each click. So you use the above techniques to repeatedly click on his ads, forcing him to spend money—sometimes a lot of money—on nothing. (Here’s a company that will commit click fraud for you.)

Click fraud has become a classic security arms race. Google improves its fraud-detection tools, so the fraudsters get increasingly clever … and the cycle continues. Meanwhile, Google is facing multiple lawsuits from those who claim the company isn’t doing enough. My guess is that everyone is right: It’s in Google’s interest both to solve and to downplay the importance of the problem.

But the overarching problem is both hard to solve and important: How do you tell if there’s an actual person sitting in front of a computer screen? How do you tell that the person is paying attention, hasn’t automated his responses, and isn’t being assisted by friends? Authentication systems are big business, whether based on something you know (passwords), something you have (tokens) or something you are (biometrics). But none of those systems can secure you against someone who walks away and lets another person sit down at the keyboard, or a computer that’s infected with a Trojan.

This problem manifests itself in other areas as well.

For years, online computer game companies have been battling players who use computer programs to assist their play: programs that allow them to shoot perfectly or see information they normally couldn’t see.

Playing is less fun if everyone else is computer-assisted, but unless there’s a cash prize on the line, the stakes are small. Not so with online poker sites, where computer-assisted players—or even computers playing without a real person at all—have the potential to drive all the human players away from the game.

Look around the internet, and you see this problem pop up again and again. The whole point of CAPTCHAs is to ensure that it’s a real person visiting a website, not just a bot on a computer. Standard testing doesn’t work online, because the tester can’t be sure that the test taker doesn’t have his book open, or a friend standing over his shoulder helping him. The solution in both cases is a proctor, of course, but that’s not always practical and negates the benefits of internet testing.
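
As a rough illustration of the CAPTCHA idea—real systems render the challenge as a distorted image rather than plain text, and the signing key and expiry window below are invented—here is a minimal challenge-response sketch in Python:

```python
import hashlib
import hmac
import random
import string
import time

SECRET_KEY = b"replace-me"  # hypothetical server-side signing key
CHALLENGE_TTL = 300         # seconds a challenge stays valid

def _sign(answer: str, issued_at: int) -> str:
    msg = ("%s:%d" % (answer, issued_at)).encode()
    return hmac.new(SECRET_KEY, msg, hashlib.sha256).hexdigest()

def issue_challenge():
    """Returns (challenge text, token, timestamp). The text would be
    rendered as a distorted image; the token travels with the form."""
    answer = "".join(random.choices(string.ascii_lowercase, k=6))
    issued_at = int(time.time())
    return answer, _sign(answer, issued_at), issued_at

def verify(response: str, token: str, issued_at: int) -> bool:
    """True only if the response matches and the challenge hasn't expired."""
    if time.time() - issued_at > CHALLENGE_TTL:
        return False
    return hmac.compare_digest(_sign(response, issued_at), token)

answer, token, ts = issue_challenge()
print(verify(answer, token, ts))    # True: someone (or something) read the image
print(verify("nope!!", token, ts))  # False
```

Of course, passing the check only proves the challenge was solved—not that a human, rather than a bot with good OCR or a hired solver, did the solving, which is precisely the problem this essay describes.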

This problem has even come up in court cases. In one instance, the prosecution demonstrated that the defendant’s computer committed some hacking offense, but the defense argued that it wasn’t the defendant who did it—that someone else was controlling his computer. And in another case, a defendant charged with a child porn offense argued that, while it was true that illegal material was on his computer, his computer was in a common room of his house and he hosted a lot of parties—and it wasn’t him who’d downloaded the porn.

Years ago, talking about security, I complained about the link between computer and chair. The easy part is securing digital information: on the desktop computer, in transit from computer to computer or on massive servers. The hard part is securing information from the computer to the person. Likewise, authenticating a computer is much easier than authenticating a person sitting in front of the computer. And verifying the integrity of data is much easier than verifying the integrity of the person looking at it—in both senses of that word.

And it’s a problem that will get worse as computers get better at imitating people.

Google is testing a new advertising model to deal with click fraud: cost-per-action ads. Advertisers don’t pay unless the customer performs a certain action: buys a product, fills out a survey, whatever. It’s a hard model to make work—Google would become more of a partner in the final sale instead of an indifferent displayer of advertising—but it’s the right security response to click fraud: Change the rules of the game so that click fraud doesn’t matter.
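
A back-of-the-envelope sketch of why the rule change works—Python, with hypothetical prices: under cost-per-click every fraudulent click costs the advertiser money, while under cost-per-action a click that never turns into a sale costs nothing:

```python
PRICE_PER_CLICK = 0.50   # hypothetical CPC rate, in dollars
PRICE_PER_ACTION = 5.00  # hypothetical CPA rate, in dollars

def bill_cpc(clicks: int) -> float:
    """Cost-per-click: the advertiser pays for every click, fraudulent or not."""
    return clicks * PRICE_PER_CLICK

def bill_cpa(actions: int) -> float:
    """Cost-per-action: the advertiser pays only when the customer
    buys a product, fills out a survey, etc."""
    return actions * PRICE_PER_ACTION

# 10,000 bot clicks that never lead to an action:
print(bill_cpc(10_000))  # 5000.0 -- click fraud drains the ad budget
print(bill_cpa(0))       # 0.0    -- click fraud is economically irrelevant
```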

That’s how to solve a security problem.

This essay appeared on Wired.com.

EDITED TO ADD (7/13): Click Monkeys is a hoax site.

EDITED TO ADD (7/25): An evaluation of Google’s anti-click-fraud efforts, as part of the Lane Gifts case. I’m not sure if this expert report was done for Google, for Lane Gifts, or for the judge.

Posted on July 13, 2006 at 5:22 AM

Hacking Computers Over USB

I’ve previously written about the risks of small portable computing devices: how more and more data can be stored on them, and then lost or stolen. But there’s another risk: if an attacker can convince you to plug his USB device into your computer, he can take it over.

Plug an iPod or USB stick into a PC running Windows and the device can literally take over the machine and search for confidential documents, copy them back to the iPod’s or USB stick’s internal storage, and hide them as “deleted” files. Alternatively, the device can simply plant spyware, or even compromise the operating system. Two features that make this possible are the Windows AutoRun facility and the ability of peripherals to use something called direct memory access (DMA). The first attack vector you can and should plug; the second is the result of a design flaw that’s likely to be with us for many years to come.

The article has the details, but basically you can configure a file on your USB device to automatically run when it’s plugged into a computer. That file can, of course, do anything you want it to.
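
For illustration, the mechanism the article describes is just an autorun.inf file in the root of the device; the executable and label names below are hypothetical:

```
[autorun]
open=driver_update.exe
icon=driver_update.exe,0
label=Vendor Giveaway
```

When AutoRun is enabled for the drive type, Windows parses this file on insertion and can launch the named program with the current user’s privileges.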

Recently I’ve been seeing more and more written about this attack. The Spring 2006 issue of 2600 Magazine, for example, contains a short article called “iPod Sneakiness” (unfortunately, not online). The author suggests that you can innocently ask someone at an Internet cafe if you can plug your iPod into his computer to power it up—and then steal his passwords and critical files.

And here’s an article about someone who used this trick in a penetration test:

We figured we would try something different by baiting the same employees that were on high alert. We gathered all the worthless vendor giveaway thumb drives collected over the years and imprinted them with our own special piece of software. I had one of my guys write a Trojan that, when run, would collect passwords, logins and machine-specific information from the user’s computer, and then email the findings back to us.

The next hurdle we had was getting the USB drives in the hands of the credit union’s internal users. I made my way to the credit union at about 6 a.m. to make sure no employees saw us. I then proceeded to scatter the drives in the parking lot, smoking areas, and other areas employees frequented.

Once I seeded the USB drives, I decided to grab some coffee and watch the employees show up for work. Surveillance of the facility was worth the time involved. It was really amusing to watch the reaction of the employees who found a USB drive. You know they plugged them into their computers the minute they got to their desks.

I immediately called my guy that wrote the Trojan and asked if anything was received at his end. Slowly but surely info was being mailed back to him. I would have loved to be on the inside of the building watching as people started plugging the USB drives in, scouring through the planted image files, then unknowingly running our piece of software.

There is a defense. From the first article:

AutoRun is just a bad idea. People putting CD-ROMs or USB drives into their computers usually want to see what’s on the media, not have programs automatically run. Fortunately you can turn AutoRun off. A simple manual approach is to hold down the “Shift” key when a disk or USB storage device is inserted into the computer. A better way is to disable the feature entirely by editing the Windows Registry. There are many instructions for doing this online (just search for “disable autorun”) or you can download and use Microsoft’s TweakUI program, which is part of the Windows XP PowerToys download. With Windows XP you can also disable AutoRun for CDs by right-clicking on the CD drive icon in the Windows explorer, choosing the AutoPlay tab, and then selecting “Take no action” for each kind of disk that’s listed. Unfortunately, disabling AutoPlay for CDs won’t always disable AutoPlay for USB devices, so the registry hack is the safest course of action.
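
For reference, the registry hack the article recommends comes down to a single value; a minimal .reg file might look like this (the 0xFF bitmask disables AutoRun for every drive type):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\Explorer]
"NoDriveTypeAutoRun"=dword:000000ff
```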

In the 1990s, the Macintosh operating system had this feature, which was removed after a virus made use of it in 1998. Microsoft needs to remove this feature as well.

EDITED TO ADD (6/12): In the penetration test, they didn’t use AutoRun.

Posted on June 8, 2006 at 1:34 PM
