Entries Tagged "malware"


Yet Another "People Plug in Strange USB Sticks" Story

I’m really getting tired of stories like this:

Computer disks and USB sticks were dropped in parking lots of government buildings and private contractors, and 60% of the people who picked them up plugged the devices into office computers. And if the drive or CD had an official logo on it, 90% were installed.

Of course people plugged in USB sticks and computer disks. It’s like “75% of people who picked up a discarded newspaper on the bus read it.” What else are people supposed to do with them?

And this is not the right response:

Mark Rasch, director of network security and privacy consulting for Falls Church, Virginia-based Computer Sciences Corp., told Bloomberg: “There’s no device known to mankind that will prevent people from being idiots.”

Maybe it would be the right response if 60% of people tried to play the USB sticks like ocarinas, or tried to make omelettes out of the computer disks. But not if they plugged them into their computers. That’s what they’re for.

People get USB sticks all the time. The problem isn’t that people are idiots, or that they should somehow know that a USB stick found on the street is automatically bad while a USB stick given away at a trade show is automatically good. The problem is that the OS trusts random USB sticks. The problem is that the OS will automatically run a program that can install malware from a USB stick. The problem is that it isn’t safe to plug a USB stick into a computer.

Quit blaming the victim. They’re just trying to get by.

EDITED TO ADD (7/4): As of February of this year, Windows no longer supports AutoRun for USB drives.
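For context, the AutoRun mechanism worked by reading an autorun.inf file from the root of the inserted media and launching whatever program it named. A minimal illustration (the filenames here are hypothetical):

```ini
[AutoRun]
; Program Windows would launch automatically on insertion
open=setup.exe
; Icon and label shown for the drive in Explorer
icon=setup.exe,0
label=Vendor Install Disc
```

Malware on a dropped USB stick simply pointed `open=` at its own executable, which is why disabling AutoRun for USB drives removed the most dangerous part of the attack.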

Posted on June 29, 2011 at 9:13 AM

Aggressive Social Engineering Against Consumers

Cyber criminals are getting aggressive with their social engineering tactics.

Val Christopherson said she received a telephone call last Tuesday from a man claiming to be with an online security company that was receiving error messages from the computer at her Charleswood home.

“He said he wanted to fix my problem over the phone,” Christopherson said.

She said she was then convinced to go online to a remote access and support website called Teamviewer.com and allow him to connect her computer to his company’s system.

“That was my big mistake,” Christopherson said.

She said the scammers then tried to sell her anti-virus software they would install.

At that point, the 61-year-old Anglican minister became suspicious and eventually broke off the call before unplugging her computer.

Christopherson said she then had to hang up on the same scam artist again, after he quickly called back claiming to be the previous caller’s manager.

Posted on May 30, 2011 at 6:58 AM

Blackhole Exploit Kit

It’s now available as a free download:

A free version of the Blackhole exploit kit has appeared online in a development that radically reduces the entry-level costs of getting into cybercrime.

The Blackhole exploit kit, which up until now would cost around $1,500 for an annual licence, creates a handy way to plant malicious scripts on compromised websites. Surfers visiting legitimate sites can be redirected using these scripts to scareware portals on sites designed to exploit browser vulnerabilities in order to distribute banking Trojans, such as those created from the ZeuS toolkit.

Posted on May 25, 2011 at 11:55 AM

New Siemens SCADA Vulnerabilities Kept Secret

SCADA systems—computer systems that control industrial processes—are one of the ways a computer hack can directly affect the real world. Here, the fears multiply. It’s not bad guys deleting your files, or getting your personal information and taking out credit cards in your name; it’s bad guys spewing chemicals into the atmosphere and dumping raw sewage into waterways. It’s Stuxnet: centrifuges spinning out of control and destroying themselves. Never mind how realistic the threat is, it’s scarier.

Last week, a researcher was successfully pressured by the Department of Homeland Security not to disclose details “before Siemens could patch the vulnerabilities.”

Beresford wouldn’t say how many vulnerabilities he found in the Siemens products, but said he gave the company four exploit modules to test. He believes that at least one of the vulnerabilities he found affects multiple SCADA-system vendors, which share “commonality” in their products. Beresford wouldn’t reveal more details, but says he hopes to do so at a later date.

We’ve been living with full disclosure for so long that many people have forgotten what life was like before it was routine.

Before full disclosure was the norm, researchers would discover vulnerabilities in software and send details to the software companies—who would ignore them, trusting in the security of secrecy. Some would go so far as to threaten the researchers with legal action if they disclosed the vulnerabilities.

Later on, researchers announced that particular vulnerabilities existed, but did not publish details. Software companies would then call the vulnerabilities “theoretical” and deny that they actually existed. Of course, they would still ignore the problems, and occasionally threaten the researcher with legal action. Then, of course, some hacker would create an exploit using the vulnerability—and the company would release a really quick patch, apologize profusely, and then go on to explain that the whole thing was entirely the fault of the evil, vile hackers.

I wrote that in 2007. Siemens is doing it right now:

Beresford expressed frustration that Siemens appeared to imply the flaws in its SCADA systems gear might be difficult for a typical hacker to exploit because the vulnerabilities unearthed by NSS Labs “were discovered while working under special laboratory conditions with unlimited access to protocols and controllers.”

There were no “‘special laboratory conditions’ with ‘unlimited access to the protocols,’” Beresford wrote Monday about how he managed to find flaws in Siemens PLC gear that would allow an attacker to compromise them. “My personal apartment on the wrong side of town where I can hear gunshots at night hardly defines a special laboratory.” Beresford said he purchased the Siemens controllers with funding from his company and found the vulnerabilities, which he says hackers with bad intentions could find as well.

That’s precisely the point. Me again from 2007:

Unfortunately, secrecy sounds like a good idea. Keeping software vulnerabilities secret, the argument goes, keeps them out of the hands of the hackers…. But that assumes that hackers can’t discover vulnerabilities on their own, and that software companies will spend time and money fixing secret vulnerabilities. Both of those assumptions are false. Hackers have proven to be quite adept at discovering secret vulnerabilities, and full disclosure is the only reason vendors routinely patch their systems.

With the pressure off, Siemens is motivated to deal with the PR problem and ignore the underlying security problem.

Posted on May 24, 2011 at 5:50 AM

Forged Memory

A scary development in rootkits:

Rootkits typically modify certain areas in the memory of the running operating system (OS) to hijack execution control from the OS. Doing so forces the OS to present inaccurate results to detection software (anti-virus, anti-rootkit).

For example, rootkits may hide files, registry entries, processes, etc., from detection software. So rootkits typically modify memory, and anti-rootkit tools inspect memory areas to identify such suspicious modifications and alert users.

This particular rootkit also modifies a memory location (installs a hook) to prevent proper disk access by detection software. Let us say that location is X. It is noteworthy that this location X is well known for being modified by other rootkit families, and is not unique to this particular rootkit.

Now since the content at location X is known to be altered by rootkits in general, most anti-rootkit tools will inspect the content at memory location X to see if it has been modified.

[…]

In the case of this particular rootkit, the original (expected) content at location X is moved by the rootkit to a different location, Y. When an anti-rootkit tool tries to read the contents at location X, it is served the contents of location Y. So the anti-rootkit tool, thinking everything is as it should be, does not warn the user of suspicious activity.
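The redirection described above can be sketched in miniature. This is a toy model, entirely illustrative: real rootkits hook kernel code paths operating on raw memory, not a Python dictionary. Here "memory" is a dict, infection installs a hook at the well-known location X while stashing the original contents at Y, and a scanner that reads through the hooked read path sees a clean-looking system:

```python
# Toy model of rootkit "forged memory" (illustrative only; names
# CLEAN, HOOK, and locations "X"/"Y" are invented for the sketch).

CLEAN = b"original dispatch code"
HOOK = b"jmp rootkit_handler"

memory = {"X": CLEAN}  # X: the well-known location scanners inspect


def infect(mem):
    """Install the hook at X, preserving the original contents at Y."""
    mem["Y"] = mem["X"]  # stash what scanners expect to see
    mem["X"] = HOOK      # actual contents of X are now the hook


def hooked_read(mem, addr):
    """Read path controlled by the rootkit: reads of X are served from Y."""
    if addr == "X" and "Y" in mem:
        return mem["Y"]  # forged view: X appears unmodified
    return mem[addr]


infect(memory)

# A scanner going through the hooked read path is fooled...
assert hooked_read(memory, "X") == CLEAN
# ...even though the real contents of X have been replaced.
assert memory["X"] == HOOK
```

The asymmetry is the whole trick: the only way to catch it is to read location X without going through the code path the rootkit controls.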

Posted on May 6, 2011 at 12:32 PM

Hijacking the Coreflood Botnet

Earlier this month, the FBI seized control of the Coreflood botnet and shut it down:

According to the filing, ISC, under law enforcement supervision, planned to replace the servers with servers that it controlled, then collect the IP addresses of all infected machines communicating with the criminal servers, and send a remote “stop” command to infected machines to disable the Coreflood malware operating on them.
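The takeover described in the filing amounts to a sinkhole: a substituted command-and-control server that records each infected machine as evidence and answers every check-in with a stop command. A hypothetical simplification (the real Coreflood protocol and ISC's server code are not public; the function names and addresses here are invented):

```python
# Hypothetical sketch of a botnet sinkhole, loosely modeled on the
# Coreflood takeover described above. Protocol details are invented.

infected_hosts = set()


def handle_checkin(source_ip: str) -> str:
    """Handle one bot check-in at the substituted C&C server."""
    infected_hosts.add(source_ip)  # record which machines are infected
    return "stop"                  # remote command disabling the malware


# Two bots phone home (RFC 5737 documentation addresses); the sinkhole
# logs both and tells both to stop.
assert handle_checkin("203.0.113.7") == "stop"
assert handle_checkin("198.51.100.22") == "stop"
assert infected_hosts == {"203.0.113.7", "198.51.100.22"}
```

The controversial part is the second line of `handle_checkin`: logging IPs is passive, but sending "stop" executes a command on machines the FBI doesn't own, which is exactly the risk the EFF raises below.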

This is a big deal; it’s the first time the FBI has done something like this. My guess is that we’re going to see a lot more of this sort of thing in the future; it’s the obvious solution for botnets.

Not that the approach is without risks:

“Even if we could absolutely be sure that all of the infected Coreflood botnet machines were running the exact code that we reverse-engineered and convinced ourselves that we understood,” said Chris Palmer, technology director for the Electronic Frontier Foundation, “this would still be an extremely sketchy action to take. It’s other people’s computers and you don’t know what’s going to happen for sure. You might blow up some important machine.”

I just don’t see this argument convincing very many people. Leaving Coreflood in place could blow up some important machine. And leaving Coreflood in place not only puts the infected computers at risk; it puts the whole Internet at risk. Minimizing the collateral damage is important, but this feels like a place where the interest of the Internet as a whole trumps the interest of those affected by shutting down Coreflood.

The problem as I see it is the slippery slope. Because next, the RIAA is going to want to remotely disable computers they feel are engaged in illegal file sharing. And the FBI is going to want to remotely disable computers they feel are encouraging terrorism. And so on. It’s important to have serious legal controls on this counterattack sort of defense.

Some more commentary.

Posted on May 2, 2011 at 6:52 AM

