Entries Tagged "malware"


Another Conficker Variant

This is one well-designed piece of malware:

Conficker B++ is somewhat similar to Conficker B, with 294 of 297 subroutines the same and 39 additional subroutines. The latest variant, first spotted on 16 February, is even sneakier than its previous incarnations, SRI explains.

Conficker B++ is no longer limited to reinfection by similarly structured Conficker DLLs, but can now push new self-contained Win32 applications. These executables can infiltrate the host using methods that are not detected by the latest anti-Conficker security applications.

[…]

The malware also creates an additional backdoor on compromised machines, producing an altogether trickier infectious agent, SRI explains.

In Conficker A and B, there appeared to be only one method to submit Win32 binaries to the digital signature validation path, and ultimately to the CreateProcess API call. This path required the use of the Internet rendezvous point to download the binary through an HTTP transaction.

Under Conficker B++, two new paths to binary validation and execution have been introduced to Conficker drones, both of which bypass the use of Internet rendezvous points: an extension to the netapi32.dll patch and the new named pipe backdoor. These changes suggest a desire by Conficker’s authors to move away from a reliance on Internet rendezvous points to support binary update, and toward a more direct flash approach.

SRI reckons that Conficker-A has infected 4.7m machines, at one time or another, while Conficker-B has hit 6.7m IP addresses. These figures, as with previous estimates, come from an analysis of the number of machines that have ever tried to call into malware update sites. The actual number of infected hosts at any one time is lower than that. SRI estimates the botnets controlled by Conficker-A and Conficker-B are around 1m and 3m hosts, respectively, or a third of the raw estimate.

Posted on February 24, 2009 at 5:23 AM

Another Password Analysis

Here’s an analysis of 30,000 passwords from phpbb.com, similar to my analysis of 34,000 MySpace passwords:

The striking difference between the two incidents is that the phpbb passwords are simpler. MySpace requires that passwords “must be between 6 and 10 characters, and contain at least 1 number or punctuation character.” Most people satisfied this requirement by simply appending “1” to the ends of their passwords. The phpbb site has no such restrictions—the passwords are shorter and rarely contain anything more than a dictionary word.

Seems like we still can’t choose good passwords. Conficker.B exploits this, trying about 200 common passwords to help spread itself.
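
The tallying behind analyses like these is easy to reproduce. Here’s a minimal sketch (the file names and dictionary are hypothetical placeholders, not the actual phpbb data) that computes a password list’s length distribution and how many entries are a bare dictionary word, or a word with “1” appended:

    from collections import Counter

    # Hypothetical inputs: one password or dictionary word per line.
    with open("passwords.txt") as f:
        passwords = [line.strip() for line in f if line.strip()]
    with open("dictionary.txt") as f:
        words = {line.strip().lower() for line in f}

    lengths = Counter(len(p) for p in passwords)
    bare_word = sum(p.lower() in words for p in passwords)
    word_plus_1 = sum(p.endswith("1") and p[:-1].lower() in words
                      for p in passwords)

    print("length distribution:", sorted(lengths.items()))
    print(f"bare dictionary words: {bare_word / len(passwords):.1%}")
    print(f"dictionary word plus '1': {word_plus_1 / len(passwords):.1%}")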

Posted on February 20, 2009 at 7:31 AM

Balancing Security and Usability in Authentication

Since January, the Conficker.B worm has been spreading like wildfire across the Internet: infecting the French Navy, hospitals in Sheffield, the court system in Houston, and millions of computers worldwide. One of the ways it spreads is by cracking administrator passwords on networks. Which leads to the important question: Why in the world are IT administrators still using easy-to-guess passwords?

Computer authentication systems have two basic requirements. They need to keep the bad guys from accessing your account, and they need to allow you to access your account. Both are important, and every authentication system is a balancing act between the two. Too little security, and the bad guys will get in too easily. But if the authentication system is too complicated, restrictive, or hard to use, you won’t be able to—or won’t bother to—use it.

Passwords are the most common authentication system, and a good place to start. They’re very easy to implement and use, which is why they’re so popular. But as computers have become faster, password guessing has become easier. Most people don’t choose passwords that are complicated enough to remain secure against modern password-guessing attacks. Conficker.B is even less clever; it just tries a list of about 200 common passwords.

To combat password guessing, many systems force users to choose harder-to-guess passwords—requiring minimum lengths, non-alphanumeric characters, etc.—and change their passwords more frequently. The first makes guessing harder, and the second makes a guessed password less valuable. This, of course, makes the system more annoying, so users respond by writing their passwords down and taping them to their monitors, or simply forgetting them more often. Smarter users write them down and put them in their wallets, or use a secure password database like Password Safe.

Users forgetting their passwords can be expensive—sysadmins or customer service reps have to field phone calls and reset passwords—so some systems include a backup authentication system: a secret question. The idea is that if you forget your password, you can authenticate yourself with some personal information that only you know. Your mother’s maiden name was traditional, but these days there are all sorts of secret questions: your favourite schoolteacher, favourite colour, street you grew up on, name of your first pet, and so on. This might make the system more usable, but it also makes it much less secure: the answers are often easy to guess, and often known by people close to you.

A common enhancement is a one-time password generator, like a SecurID token. This is a small device with a screen that displays a password that changes automatically once a minute. Adding this is called two-factor authentication, and is much more secure, because this token—”something you have”—is combined with a password—”something you know.” But it’s less usable, because the tokens have to be purchased and distributed to all users, and far too often it’s “something you lost or forgot.” And it costs money. Tokens are far more frequently used in corporate environments, but banks and some online gaming worlds have taken to using them—sometimes only as an option, because people don’t like them.
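
The tokens themselves are conceptually simple: the device and the server share a secret key, and both derive a short code from that key and the current time. Here’s a minimal sketch of a time-based one-time password along the lines of RFC 4226/6238 (the shared secret is a made-up example; real deployments add provisioning, clock-skew windows, and rate limiting):

    import hmac, hashlib, struct, time

    def totp(secret: bytes, interval: int = 60, digits: int = 6) -> str:
        """Derive a short-lived code from a shared secret and the current time."""
        counter = int(time.time()) // interval   # changes once per interval
        msg = struct.pack(">Q", counter)          # 8-byte big-endian counter
        mac = hmac.new(secret, msg, hashlib.sha1).digest()
        offset = mac[-1] & 0x0F                   # dynamic truncation (RFC 4226)
        code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    # Token and server compute the same code; only someone holding the secret can.
    print(totp(b"example-shared-secret"))

Because the code depends on both the secret and the clock, a password stolen today is worthless a minute later.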

In most cases, how an authentication system works when a legitimate user tries to log on is much more important than how it works when an impostor tries to log on. No security system is perfect, and there is some level of fraud associated with any of these authentication methods. But the instances of fraud are rare compared to the number of times someone tries to log on legitimately. If a given authentication system let the bad guys in one in a hundred times, a bank could decide to live with the problem—or try to solve it in some other way. But if the same authentication system prevented legitimate customers from logging on even one in a thousand times, the number of complaints would be enormous and the system wouldn’t survive one week.

Balancing security and usability is hard, and many organizations get it wrong. But it’s also evolving; organizations needing to tighten their security continue to push more involved authentication methods, and more savvy Internet users are willing to accept them. And certainly IT administrators need to be leading that evolutionary change.

A version of this essay was originally published in The Guardian.

Posted on February 19, 2009 at 1:44 PM

Computer Virus Epidemiology

“WiFi networks and malware epidemiology,” by Hao Hu, Steven Myers, Vittoria Colizza, and Alessandro Vespignani.

Abstract

In densely populated urban areas WiFi routers form a tightly interconnected proximity network that can be exploited as a substrate for the spreading of malware able to launch massive fraudulent attacks. In this article, we consider several scenarios for the deployment of malware that spreads over the wireless channel of major urban areas in the US. We develop an epidemiological model that takes into consideration prevalent security flaws on these routers. The spread of such a contagion is simulated on real-world data for georeferenced wireless routers. We uncover a major weakness of WiFi networks in that most of the simulated scenarios show tens of thousands of routers infected in as little as 2 weeks, with the majority of the infections occurring in the first 24–48 h. We indicate possible containment and prevention measures and provide computational estimates for the rate of encrypted routers that would stop the spreading of the epidemics by placing the system below the percolation threshold.

Honestly, I’m not sure I understood most of the article. And I don’t think that their model is all that great. But I like to see these sorts of methods applied to malware and infection rates.
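
The paper’s basic setup is easy to toy with, though. Here’s a minimal sketch of the idea (all parameters are invented for illustration and are not the paper’s): scatter routers at random, link any pair within radio range, mark a fraction as encrypted and therefore immune, and watch how far an infection spreads from a single seed:

    import random

    random.seed(1)
    N, RADIUS, ENCRYPTED = 2000, 0.035, 0.3   # invented parameters

    # Scatter routers in the unit square; link any pair within radio range.
    pos = [(random.random(), random.random()) for _ in range(N)]
    nbrs = [[] for _ in range(N)]
    for i in range(N):
        for j in range(i + 1, N):
            dx, dy = pos[i][0] - pos[j][0], pos[i][1] - pos[j][1]
            if dx * dx + dy * dy < RADIUS * RADIUS:
                nbrs[i].append(j)
                nbrs[j].append(i)

    immune = {i for i in range(N) if random.random() < ENCRYPTED}
    seed = next(i for i in range(N) if i not in immune)
    infected, frontier = {seed}, [seed]
    while frontier:  # breadth-first spread over crackable routers
        frontier = [j for i in frontier for j in nbrs[i]
                    if j not in infected and j not in immune]
        infected.update(frontier)

    print(f"{len(infected)} of {N} routers infected with {ENCRYPTED:.0%} encrypted")

Push ENCRYPTED past some threshold and the crackable routers fragment into small islands, so the outbreak stays local; that’s the percolation argument in the abstract.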

EDITED TO ADD (3/13): Earlier—but free—version of the paper.

Posted on February 18, 2009 at 5:53 AM

Insiders

Rajendrasinh Makwana was a UNIX contractor for Fannie Mae. On October 24, he was fired. Before he left, he slipped a logic bomb into the organization’s network. The bomb would have “detonated” on January 31. It was programmed to disable access to the server on which it was running, block any network monitoring software, systematically and irretrievably erase everything—and then replicate itself on all 4,000 Fannie Mae servers. Court papers claim the damage would have been in the millions of dollars, a number that seems low. Fannie Mae would have been shut down for at least a week.

Luckily—and it does seem it was pure luck—another programmer discovered the script a week later, and disabled it.

Insiders are a perennial problem. They have access, and they’re known by the system. They know how the system and its security works, and its weak points. They have opportunity. Bank heists, casino thefts, large-scale corporate fraud, train robberies: many of the most impressive criminal attacks involve insiders. And, like Makwana’s attempt at revenge, these insiders can have pretty intense motives—motives that can only intensify as the economy continues to suffer and layoffs increase.

Insiders are especially pernicious attackers because they’re trusted. They have access because they’re supposed to have access. They have opportunity, and an understanding of the system, because they use it—or they designed, built, or installed it. They’re already inside the security system, making them much harder to defend against.

It’s not possible to design a system without trusted people. They’re everywhere. In offices, employees are trusted people given access to facilities and resources, and allowed to act—sometimes broadly, sometimes narrowly—in the company’s name. In stores, employees are allowed access to the back room and the cash register; and customers are trusted to walk into the store and touch the merchandise. IRS employees are trusted with personal tax information; hospital employees are trusted with personal health information. Banks, airports, and prisons couldn’t operate without trusted people.

Replacing trusted people with computers doesn’t make the problem go away; it just moves it around and makes it even more complex. The computer, software, and network designers, implementers, coders, installers, maintainers, etc. are all trusted people. See any analysis of the security of electronic voting machines, or some of the frauds perpetrated against computerized gambling machines, for some graphic examples of the risks inherent in replacing people with computers.

Of course, this problem is much, much older than computers. And the solutions haven’t changed much throughout history, either. There are five basic techniques to deal with trusted people:

1. Limit the number of trusted people. This one is obvious. The fewer people who have root access to the computer system, know the combination to the safe, or have the authority to sign checks, the more secure the system is.

2. Ensure that trusted people are also trustworthy. This is the idea behind background checks, lie detector tests, personality profiling, prohibiting convicted felons from getting certain jobs, limiting other jobs to citizens, the TSA’s no-fly list, and so on, as well as behind bonding employees, which means there are deep pockets standing behind them if they turn out not to be trustworthy.

3. Limit the amount of trust each person has. This is compartmentalization; the idea here is to limit the amount of damage a person can do if he ends up not being trustworthy. This is the concept behind giving people keys that only unlock their office or passwords that only unlock their account, as well as “need to know” and other levels of security clearance.

4. Give people overlapping spheres of trust. This is what security professionals call defense in depth. It’s why it takes two people with two separate keys to launch nuclear missiles, and two signatures on corporate checks over a certain value. It’s the idea behind bank tellers requiring management overrides for high-value transactions, double-entry bookkeeping, and all those guards and cameras at casinos. It’s why, when you go to a movie theater, one person sells you a ticket and another person standing a few yards away tears it in half: It makes it much harder for one employee to defraud the system. It’s why key bank employees need to take their two-week vacations all at once—so their replacements have a chance to uncover any fraud.

5. Detect breaches of trust after the fact and prosecute the guilty. In the end, the four previous techniques can only do so much. Trusted people can subvert a system. Most of the time, we discover the security breach after the fact and then punish the perpetrator through the legal system: publicly, so as to provide a deterrent effect and increase the overall level of security in society. This is why audit is so vital.

These security techniques don’t only protect against fraud or sabotage; they protect against the more common problem: mistakes. Trusted people aren’t perfect; they can inadvertently cause damage. They can make a mistake, or they can be tricked into making a mistake through social engineering.

Good security systems use multiple measures, all working together. Fannie Mae certainly limits the number of people who have the ability to slip malicious scripts into their computer systems, and certainly limits the access that most of these people have. It probably has a hiring process that makes it less likely that malicious people come to work at Fannie Mae. It obviously doesn’t have an audit process by which a change one person makes on the servers is checked by someone else; I’m sure that would be prohibitively expensive. Certainly the company’s IT department should have terminated Makwana’s network access as soon as he was fired, and not at the end of the day.

In the end, systems will always have trusted people who can subvert them. It’s important to keep in mind that incidents like this don’t happen very often; that most people are honest and honorable. Security is very much designed to protect against the dishonest minority. And often little things—like disabling access immediately upon termination—can go a long way.

This essay originally appeared on the Wall Street Journal website.

Posted on February 16, 2009 at 12:20 PM

Interview with an Adware Developer

Fascinating:

I should probably first speak about how adware works. Most adware targets Internet Explorer (IE) users because obviously they’re the biggest share of the market. In addition, they tend to be the less-savvy chunk of the market. If you’re using IE, then either you don’t care or you don’t know about all the vulnerabilities that IE has.

IE has a mechanism called a Browser Helper Object (BHO) which is basically a gob of executable code that gets informed of web requests as they’re going. It runs in the actual browser process, which means it can do anything the browser can do—which means basically anything. We would have a Browser Helper Object that actually served the ads, and then we made it so that you had to kill all the instances of the browser to be able to delete the thing. That’s a little bit of persistence right there.

If you also have an installer, a little executable, you can make a Registry entry and every time this thing reboots, the installer will check to make sure the BHO is there. If it is, great. If it isn’t, then it will install it. That’s fine until somebody goes and deletes the executable.

The next thing that Direct Revenue did—actually I should say what I did, because I was pretty heavily involved in this—was make a poller which continuously polled, about every 10 seconds or so, to see if the BHO was there and alive. If it was, great. If it wasn’t, [the poller would] install it. To make sure the poller was less likely to be detected, we developed this algorithm (a really trivial one) for making a random-looking filename that was consistent per machine but was not easy to guess. I think it was the first 6 or 8 characters of the DES-encoded MAC address. You take the MAC address, encode it with DES, take the first six characters and that was it. That was pretty good, except the file itself would be the same binary. If you md5-summed the file it would always be the same everywhere, and it was always in the same location.

Next we made a function shuffler, which would go into an executable, take the functions and randomly shuffle them. Once you do that, then of course the signature’s all messed up. [We also shuffled] a lot of the pointers within each actual function. It completely changed the shape of the executable.

We then made a bootstrapper, which was a tiny tiny piece of code written in Assembler which would decrypt the executable in memory, and then just run it. At the same time, we also made a virtual process executable. I’ve never heard of anybody else doing this before. Windows has this thing called Create Remote Thread. Basically, the semantics of Create Remote Thread are: You’re a process, I’m a different process. I call you and say “Hey! I have this bit of code. I’d really like it if you’d run this.” You’d say, “Sure,” because you’re a Windows process—you’re all hippie-like and free love. Windows processes, by the way, are insanely promiscuous. So! We would call a bunch of processes, hand them all a gob of code, and they would all run it. Each process would know about two of the other ones. This allowed them to set up a ring…mutual support, right?

So we’ve progressed now from having just a Registry key entry, to having an executable, to having a randomly-named executable, to having an executable which is shuffled around a little bit on each machine, to one that’s encrypted—really more just obfuscated—to an executable that doesn’t even run as an executable. It runs merely as a series of threads. Now, those threads can communicate with one another, they would check to make sure that the BHO was there and up, and that whatever other software we had was also up.

There was one further step that we were going to take but didn’t end up doing, and that is we were going to get rid of threads entirely, and just use interrupt handlers. It turns out that in Windows, you can get access to the interrupt handler pretty easily. In fact, you can register with the OS a chunk of code to handle a given interrupt. Then all you have to do is arrange for an interrupt to happen, and every time that interrupt happens, you wake up, do your stuff and go away. We never got to actually do that, but it was something we were thinking we’d do.
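
The filename trick described above is easy to picture in code. Here’s a rough sketch of the scheme as the interviewee describes it (the DES key, the zero padding, and the use of hex digits are my guesses; the interview gives none of those details), using the pycryptodome library:

    import uuid
    from Crypto.Cipher import DES  # pycryptodome

    KEY = b"8bytekey"  # hypothetical: the interview doesn't say what keyed the DES

    mac = uuid.getnode().to_bytes(6, "big")  # this machine's MAC address
    block = mac.ljust(8, b"\x00")            # pad to DES's 8-byte block (a guess)
    name = DES.new(KEY, DES.MODE_ECB).encrypt(block).hex()[:6]

    print(name)  # stable per machine, random-looking, unguessable without the key

Which is also why it was only “pretty good”: the name varied per machine, but the file’s contents and location never did, so an md5 of the binary still identified it everywhere.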

EDITED TO ADD (1/30): Good commentary on the interview, showing how it whitewashes history.

EDITED TO ADD (2/13): Some more commentary.

Posted on January 30, 2009 at 6:19 AM

The Economics of Spam

Excellent paper on the economics of spam. The authors infiltrated the Storm worm and monitored its doings.

After 26 days, and almost 350 million e-mail messages, only 28 sales resulted—a conversion rate of well under 0.00001%. Of these, all but one were for male-enhancement products and the average purchase price was close to $100. Taken together, these conversions would have resulted in revenues of $2,731.88—a bit over $100 a day for the measurement period or $140 per day for periods when the campaign was active. However, our study interposed on only a small fraction of the overall Storm network—we estimate roughly 1.5 percent based on the fraction of worker bots we proxy. Thus, the total daily revenue attributable to Storm’s pharmacy campaign is likely closer to $7000 (or $9500 during periods of campaign activity). By the same logic, we estimate that Storm self-propagation campaigns can produce between 3500 and 8500 new bots per day.

Under the assumption that our measurements are representative over time (an admittedly dangerous assumption when dealing with such small samples), we can extrapolate that, were it sent continuously at the same rate, Storm-generated pharmaceutical spam would produce roughly 3.5 million dollars of revenue in a year. This number could be even higher if spam-advertised pharmacies experience repeat business. A bit less than “millions of dollars every day,” but certainly a healthy enterprise.
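
The extrapolation is simple arithmetic, and worth redoing from the quoted figures:

    sales, emails = 28, 350e6
    active_daily = 140.0            # observed $/day while the campaign was active

    conversion = sales / emails     # ~8e-8, "well under 0.00001%"
    observed_fraction = 0.015       # the ~1.5% of Storm's workers they proxied
    botnet_daily = active_daily / observed_fraction  # ~$9,300/day ("$9500")
    annual = botnet_daily * 365                      # ~$3.4M ("roughly 3.5 million")

    print(f"conversion rate: {conversion:.2e}")
    print(f"whole-botnet daily revenue: ${botnet_daily:,.0f}")
    print(f"annualized: ${annual / 1e6:.1f}M")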

Of course, the authors point out that it’s dangerous to make these sorts of generalizations:

We would be the first to admit that these results represent a single data point and are not necessarily representative of spam as a whole. Different campaigns, using different tactics and marketing different products, will undoubtedly produce different outcomes. Indeed, we caution strongly against researchers using the conversion rates we have measured for these Storm-based campaigns to justify assumptions in any other context.

Spam is all about economics. When sending junk mail costs a dollar in paper, list rental, and postage, a marketer needs a reasonable conversion rate to make the campaign worthwhile. When sending junk mail is almost free, a one in ten million conversion rate is acceptable.
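
Put as a formula, a campaign breaks even when conversion_rate × profit_per_sale exceeds cost_per_message. A quick comparison, using the dollar postage figure above and an invented, deliberately generous per-message cost for spam:

    profit_per_sale = 100.0  # roughly the average purchase price quoted above

    for channel, cost_per_message in [("postal mail", 1.0), ("spam", 1e-5)]:
        breakeven = cost_per_message / profit_per_sale
        print(f"{channel}: break-even conversion rate is 1 in {1 / breakeven:,.0f}")

At a dollar per message you need one sale per hundred; at a thousandth of a cent, one sale in ten million covers the cost.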

News articles.

Posted on November 12, 2008 at 6:52 AM

"Scareware" Vendors Sued

This is good:

Microsoft Corp. and the state of Washington this week filed lawsuits against a slew of “scareware” purveyors, scam artists who use fake security alerts to frighten consumers into paying for worthless computer security software.

The case filed by the Washington attorney general’s office names Texas-based Branch Software and its owner James Reed McCreary IV, alleging that McCreary’s company caused targeted PCs to pop up misleading alerts about security threats on the victims’ computers. The alerts warned users that their systems were “damaged and corrupted” and instructed them to visit a Web site to purchase a copy of Registry Cleaner XP for $39.95.

I would have thought that existing scam laws would be enough, but Washington state actually has a specific law about this sort of thing:

The lawsuits were filed under Washington’s Computer Spyware Act, which among other things punishes individuals who prey on user concerns regarding spyware or other threats. Specifically, the law makes it illegal to misrepresent the extent to which software is required for computer security or privacy, and it provides actual damages or statutory damages of $100,000 per violation, whichever is greater.

Posted on October 2, 2008 at 7:03 AM

News from the Rock Phish Gang

Definitely interesting:

Based in Europe, the Rock Phish group is a criminal collective that has been targeting banks and other financial institutions since 2004. According to RSA, they are responsible for half of the worldwide phishing attacks and have siphoned tens of millions of dollars from individuals’ bank accounts. The group got its name from a now discontinued quirk in which the phishers used directory paths that contained the word “rock.”

The first sign the group was expanding operations came in April, when it introduced a trojan known alternately as Zeus or WSNPOEM, which steals sensitive financial information in transit from a victim’s machine to a bank. Shortly afterward, the gang added more crimeware, including a custom-made botnet client that was spread, among other means, using the Neosploit infection kit.

[…]

Soon, additional signs appeared pointing to a partnership between Rock Phishers and Asprox. Most notably, the command and control server for the custom Rock Phish crimeware had exactly the same directory structure as many of the Asprox servers, leading RSA researchers to believe Rock Phish and Asprox attacks were using at least one common server. (Researchers from Damballa were able to confirm this finding after observing malware samples from each of the respective botnets establish HTTP proxy server connections to a common set of destination IPs.)

Posted on September 10, 2008 at 7:47 AM

