Entries Tagged "antivirus"


BIOS Hacking

We’ve learned a lot about the NSA’s abilities to hack a computer’s BIOS so that the hack survives reinstalling the OS. Now we have a research presentation about it.

From Wired:

The BIOS boots a computer and helps load the operating system. By infecting this core software, which operates below antivirus and other security products and therefore is not usually scanned by them, spies can plant malware that remains live and undetected even if the computer’s operating system were wiped and re-installed.

[…]

Although most BIOS have protections to prevent unauthorized modifications, the researchers were able to bypass these to reflash the BIOS and implant their malicious code.

[…]

Because many BIOS share some of the same code, they were able to uncover vulnerabilities in 80 percent of the PCs they examined, including ones from Dell, Lenovo and HP. The vulnerabilities, which they’re calling incursion vulnerabilities, were so easy to find that they wrote a script to automate the process and eventually stopped counting the vulns it uncovered because there were too many.

From ThreatPost:

Kallenberg said an attacker would need to already have remote access to a compromised computer in order to execute the implant and elevate privileges on the machine through the hardware. Their exploit turns down existing protections in place to prevent re-flashing of the firmware, enabling the implant to be inserted and executed.

The devious part of their exploit is that they’ve found a way to insert their agent into System Management Mode, which is used by firmware and runs separately from the operating system, managing various hardware controls. System Management Mode also has access to memory, which puts supposedly secure operating systems such as Tails in the line of fire of the implant.

From the Register:

“Because almost no one patches their BIOSes, almost every BIOS in the wild is affected by at least one vulnerability, and can be infected,” Kovah says.

“The high amount of code reuse across UEFI BIOSes means that BIOS infection can be automatic and reliable.

“The point is less about how vendors don’t fix the problems, and more how the vendors’ fixes are going un-applied by users, corporations, and governments.”

From Forbes:

Though such “voodoo” hacking will likely remain a tool in the arsenal of intelligence and military agencies, it’s getting easier, Kallenberg and Kovah believe. This is in part due to the widespread adoption of UEFI, a framework that makes it easier for the vendors along the manufacturing chain to add modules and tinker with the code. That’s proven useful for the good guys, but also made it simpler for researchers to inspect the BIOS, find holes and create tools that find problems, allowing Kallenberg and Kovah to show off exploits across different PCs. In the demo to FORBES, an HP PC was used to carry out an attack on an ASUS machine. Kovah claimed that in tests across different PCs, he was able to find and exploit BIOS vulnerabilities across 80 per cent of machines he had access to and he could find flaws in the remaining 10 per cent.

“There are protections in place that are supposed to prevent you from flashing the BIOS and we’ve essentially automated a way to find vulnerabilities in this process to allow us to bypass them. It turns out bypassing the protections is pretty easy as well,” added Kallenberg.
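To make the flashing protections Kallenberg mentions concrete, here is a minimal Python sketch of the write-protection bits Intel chipsets expose in the BIOS_CNTL register, the sort of thing an implant has to clear or race before it can reflash the firmware. The register layout and bit positions are my assumptions from Intel's PCH documentation, and the value is hard-coded for illustration; in practice it would be read with a firmware-inspection tool rather than like this.

```python
# Minimal sketch: decoding the BIOS write-protection bits an attacker must
# defeat before reflashing firmware. Bit positions follow Intel's PCH
# BIOS_CNTL register layout (an assumption; check the datasheet for your
# chipset). The raw value is hard-coded for illustration only.

BIOSWE  = 1 << 0   # BIOS Write Enable: 1 means the flash is writable
BLE     = 1 << 1   # BIOS Lock Enable: setting BIOSWE triggers an SMI that
                   # the firmware can use to veto the write
SMM_BWP = 1 << 5   # SMM BIOS Write Protect: writes allowed only from SMM

def describe_bios_cntl(value: int) -> str:
    """Summarize whether the firmware flash looks writable from the OS."""
    writable = bool(value & BIOSWE)
    locked   = bool(value & BLE)
    smm_only = bool(value & SMM_BWP)
    if writable and not locked:
        return "flash writable with no lock -- trivially reflashable"
    if locked and not smm_only:
        return "BLE set but no SMM_BWP -- protection depends on the SMI handler"
    if smm_only:
        return "SMM_BWP set -- writes must originate from System Management Mode"
    return "flash not currently writable"

if __name__ == "__main__":
    # Example value: BLE set, BIOSWE and SMM_BWP clear.
    print(describe_bios_cntl(0b00000010))
```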

The NSA has a term for vulnerabilities it thinks are exclusive to it: NOBUS, for “nobody but us.” Turns out that NOBUS is a flawed concept. As I keep saying: “Today’s top-secret programs become tomorrow’s PhD theses and the next day’s hacker tools.” By continuing to exploit these vulnerabilities rather than fixing them, the NSA is keeping us all vulnerable.

Two Slashdot threads. Hacker News thread. Reddit thread.

EDITED TO ADD (3/31): Slides from the CanSecWest presentation. The bottom line is that there are some pretty huge BIOS insecurities out there. We as a community and industry need to figure out how to regularly patch our BIOSes.

Posted on March 23, 2015 at 7:07 AM

Is Antivirus Dead?

Symantec declared anti-virus dead, and Brian Krebs writes a good response.

He’s right: antivirus won’t protect you from the ever-increasing percentage of malware that’s specifically designed to bypass antivirus software, but it will protect you from all the random unsophisticated attacks out there: the “background radiation” of the Internet.
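To make the “background radiation” distinction concrete, here is a minimal sketch of what signature-style blacklisting amounts to: a file is flagged only if it matches something already catalogued. The digest below is a placeholder, not a real signature, and real products do far more than exact hash matching.

```python
# Minimal sketch of signature-style blacklisting. The digest is a
# placeholder, not a real malware signature, and real antivirus engines
# layer heuristics and behavioral checks on top of simple lookups.
import hashlib
from pathlib import Path

KNOWN_BAD_SHA256 = {
    "0" * 64,  # placeholder for a previously catalogued commodity sample
}

def is_known_bad(path: Path) -> bool:
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return digest in KNOWN_BAD_SHA256

# A freshly built, targeted sample has a hash no vendor has seen yet, so a
# lookup like this always comes back clean -- the gap described above.
```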

Posted on May 15, 2014 at 1:18 PM

Postmortem: NSA Exploits of the Day

When I decided to post an exploit a day from the TAO implant catalog, my goal was to highlight the myriad of capabilities of the NSA’s Tailored Access Operations group, basically, its black bag teams. The catalog was published by Der Spiegel along with a pair of articles on the NSA’s CNE—that’s Computer Network Exploitation—operations, and it was just too much to digest. While the various nations’ counterespionage groups certainly pored over the details, they largely washed over us in the academic and commercial communities. By republishing a single exploit a day, I hoped we would all read and digest each individual TAO capability.

It’s important that we know the details of these attack tools. Not because we want to evade the NSA—although some of us do—but because the NSA doesn’t have a monopoly on either technology or cleverness. The NSA might have a larger budget than every other intelligence agency in the world combined, but these tools are the sorts of things that any well-funded nation-state adversary would use. And as technology advances, they are the sorts of tools we’re going to see cybercriminals use. So think of this less as what the NSA does, and more as a head start on what everyone will be using.

Which means we need to figure out how to defend against them.

The NSA has put a lot of effort into designing software implants that evade antivirus and other detection tools, transmit data when they know they can’t be detected, and survive reinstallation of the operating system. It has software implants designed to jump air gaps without being detected. It has an impressive array of hardware implants, also designed to evade detection. And it spends a lot of effort on hacking routers and switches. These sorts of observations should become a road map for anti-malware companies.

Anyone else have observations or comments, now that we’ve seen the entire catalog?

The TAO catalog isn’t current; it’s from 2008. So the NSA has had six years to improve all of the tools in this catalog, and to add a bunch more. Figuring out how to extrapolate to current capabilities is also important.

Posted on March 12, 2014 at 6:31 AM

How Antivirus Companies Handle State-Sponsored Malware

Since we learned that the NSA has surreptitiously weakened Internet security so it could more easily eavesdrop, we’ve been wondering if it’s done anything to antivirus products. Given that it engages in offensive cyberattacks—and launches cyberweapons like Stuxnet and Flame—it’s reasonable to assume that it’s asked antivirus companies to ignore its malware. (We know that antivirus companies have previously done this for corporate malware.)

My guess is that the NSA has not done this, nor has any other government intelligence or law enforcement agency. My reasoning is that antivirus is a very international industry, and while a government might get its own companies to play along, it would not be able to influence international companies. So while the NSA could certainly pressure McAfee or Symantec—both Silicon Valley companies—to ignore NSA malware, it could not similarly pressure Kaspersky Labs (Russian), F-Secure (Finnish), or AVAST (Czech). And the governments of Russia, Finland, and the Czech Republic will have comparable problems.

Even so, I joined a group of security experts to ask antivirus companies explicitly if they were ignoring malware at the behest of a government. Understanding that the companies could certainly lie, this is the response so far: no one has admitted to doing so.

Up until this moment, only a handful of the vendors have replied: ESET, F-Secure, Norman Shark, Kaspersky, Panda and Trend Micro. All of the responding companies have confirmed the detection of state-sponsored malware, e.g. R2D2 and FinFisher. Furthermore, they claim they have never received a request to not detect malware. And if they were asked by any government to do so in the future, they said they would not comply. All the aforementioned companies believe there is no such thing as harmless malware.

Posted on December 2, 2013 at 6:05 AM

The Failure of Anti-Virus Companies to Catch Military Malware

Mikko Hypponen of F-Secure attempts to explain why anti-virus companies didn’t catch Stuxnet, DuQu, and Flame:

When we went digging through our archive for related samples of malware, we were surprised to find that we already had samples of Flame, dating back to 2010 and 2011, that we were unaware we possessed. They had come through automated reporting mechanisms, but had never been flagged by the system as something we should examine closely. Researchers at other antivirus firms have found evidence that they received samples of the malware even earlier than this, indicating that the malware was older than 2010.

What this means is that all of us had missed detecting this malware for two years, or more. That’s a spectacular failure for our company, and for the antivirus industry in general.

It wasn’t the first time this has happened, either. Stuxnet went undetected for more than a year after it was unleashed in the wild, and was only discovered after an antivirus firm in Belarus was called in to look at machines in Iran that were having problems. When researchers dug back through their archives for anything similar to Stuxnet, they found that a zero-day exploit that was used in Stuxnet had been used before with another piece of malware, but had never been noticed at the time. A related malware called DuQu also went undetected by antivirus firms for over a year.

Stuxnet, Duqu and Flame are not normal, everyday malware, of course. All three of them were most likely developed by a Western intelligence agency as part of covert operations that weren’t meant to be discovered.

His conclusion is that the attackers—in this case, military intelligence agencies—are simply better than commercial-grade anti-virus programs.

The truth is, consumer-grade antivirus products can’t protect against targeted malware created by well-resourced nation-states with bulging budgets. They can protect you against run-of-the-mill malware: banking trojans, keystroke loggers and e-mail worms. But targeted attacks like these go to great lengths to avoid antivirus products on purpose. And the zero-day exploits used in these attacks are unknown to antivirus companies by definition. As far as we can tell, before releasing their malicious codes to attack victims, the attackers tested them against all of the relevant antivirus products on the market to make sure that the malware wouldn’t be detected. They have unlimited time to perfect their attacks. It’s not a fair war between the attackers and the defenders when the attackers have access to our weapons.

We really should have been able to do better. But we didn’t. We were out of our league, in our own game.

I don’t buy this. It isn’t just the military that tests its malware against commercial defense products; criminals do it, too. Virus and worm writers do it. Spam writers do it. This is the never-ending arms race between attacker and defender, and it’s been going on for decades. Probably the people who wrote Flame had a larger budget than a large-scale criminal organization, but their evasive techniques weren’t magically better. Note that F-Secure and others had samples of Flame; they just didn’t do anything about them.

I think the difference has more to do with the ways in which these military malware programs spread. That is, slowly and stealthily. It was never a priority to understand—and then write signatures to detect—the Flame samples because they were never considered a problem. Maybe they were classified as a one-off. Or as an anomaly. I don’t know, but it seems clear that conventional non-military malware writers who want to evade detection should adopt the propagation techniques of Flame, Stuxnet, and DuQu.

EDITED TO ADD (6/23): F-Secure responded. Unfortunately, it’s not a very substantive response. It’s a pity; I think there’s an interesting discussion to be had about why the anti-virus companies all missed Flame for so long.

Posted on June 19, 2012 at 7:11 AM

Stealing Source Code

Hackers stole some source code to Symantec’s products. We don’t know what was stolen or how recent the code is—the company is, of course, minimizing the story—but it’s hard to get worked up about this. Yes, maybe the bad guys will comb the code looking for vulnerabilities, and maybe there’s some smoking gun that proves Symantec’s involvement in something sinister, but most likely Symantec’s biggest problem is public embarrassment.

Posted on January 9, 2012 at 12:55 PM

Aggressive Social Engineering Against Consumers

Cyber criminals are getting aggressive with their social engineering tactics.

Val Christopherson said she received a telephone call last Tuesday from a man stating he was with an online security company that was receiving error messages from the computer at her Charleswood home.

“He said he wanted to fix my problem over the phone,” Christopherson said.

She said she was then convinced to go online to a remote access and support website called Teamviewer.com and allow him to connect her computer to his company’s system.

“That was my big mistake,” Christopherson said.

She said the scammers then tried to sell her anti-virus software they would install.

At that point, the 61-year-old Anglican minister became suspicious and eventually broke off the call before unplugging her computer.

Christopherson said she then had to hang up on the same scam artist again, after he quickly called back claiming to be the previous caller’s manager.

Posted on May 30, 2011 at 6:58 AM

Forged Memory

A scary development in rootkits:

Rootkits typically modify certain areas in the memory of the running operating system (OS) to hijack execution control from the OS. Doing so forces the OS to present inaccurate results to detection software (anti-virus, anti-rootkit).

For example, rootkits may hide files, registries, processes, etc., from detection software. So rootkits typically modify memory. And anti-rootkit tools inspect memory areas to identify such suspicious modifications and alarm users.

This particular rootkit also modifies a memory location (installs a hook) to prevent proper disk access by detection software. Let us say that location is X. It is noteworthy that this location X is well known for being modified by other rootkit families, and is not unique to this particular rootkit.

Now since the content at location X is known to be altered by rootkits in general, most anti-rootkit tools will inspect the content at memory location X to see if it has been modified.

[…]

In the case of this particular rootkit, the original (what’s expected) content at location X is moved by the rootkit to a different location, Y. When an anti-rootkit tool tries to read the contents at location X, it is served contents from location Y. So, the anti-rootkit tool thinking everything is as it should be, does not warn the user of suspicious activity.
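The trick is easier to see as a toy model. The sketch below fakes “memory” with a Python dictionary; the names and read functions are purely illustrative, since real rootkits do this with kernel hooks rather than anything resembling this code.

```python
# Toy model of the forged-memory trick described above.
memory = {"X": "original_handler"}   # the well-known hook point

# 1. The rootkit stashes the original contents of X at some location Y,
#    then overwrites X with its hook.
memory["Y"] = memory["X"]
memory["X"] = "rootkit_hook"

def raw_read(addr):
    # What is actually in memory.
    return memory[addr]

def filtered_read(addr):
    # 2. The rootkit intercepts reads of X and serves the preserved copy
    #    from Y, so inspection tools see the expected, unmodified value.
    return memory["Y"] if addr == "X" else memory[addr]

# 3. An anti-rootkit tool whose reads go through the hooked path sees
#    "original_handler" at X and raises no alarm.
print(raw_read("X"))       # rootkit_hook
print(filtered_read("X"))  # original_handler
```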

Posted on May 6, 2011 at 12:32 PM

Scareware: How Crime Pays

Scareware is fraudulent software that uses deceptive advertising to trick users into believing they’re infected with some variety of malware, then convinces them to pay money to protect themselves. The infection isn’t real, and the software they buy is fake, too. It’s all a scam.

Here’s one scareware operator who sold “more than 1 million software products” at “$39.95 or more,” and now has to pay $8.2 million to settle a Federal Trade Commission complaint.

Seems to me that $40 per customer, minus $8.20 to pay off the FTC, is still a pretty good revenue model. Their operating costs can’t be very high, since the software doesn’t actually do anything. Yes, a court ordered them to close down their business, but certainly there are other creative entrepreneurs that can recognize a business opportunity when they see it.
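For what it’s worth, here is the arithmetic behind that back-of-the-envelope claim, using the round numbers from the settlement:

```python
# Rough numbers from the FTC settlement: at least 1 million sales at
# $39.95 or more, against an $8.2 million payout.
units_sold = 1_000_000
price = 39.95
settlement = 8_200_000

settlement_per_sale = settlement / units_sold   # $8.20 per customer
margin_per_sale = price - settlement_per_sale   # about $31.75 per customer

print(f"settlement per sale: ${settlement_per_sale:.2f}")
print(f"left over per sale:  ${margin_per_sale:.2f}")
```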

Posted on February 7, 2011 at 8:45 AM

Whitelisting vs. Blacklisting

The whitelist/blacklist debate is far older than computers, and it’s instructive to recall what works where. Physical security works generally on a whitelist model: if you have a key, you can open the door; if you know the combination, you can open the lock. We do it this way not because it’s easier—although it is generally much easier to make a list of people who should be allowed through your office door than a list of people who shouldn’t—but because it’s a security system that can be implemented automatically, without people.

To find blacklists in the real world, you have to start looking at environments where almost everyone is allowed. Casinos are a good example: everyone can come in and gamble except those few specifically listed in the casino’s black book or the more general Griffin book. Some retail stores have the same model—a Google search on “banned from Wal-Mart” results in 1.5 million hits, including Megan Fox—although you have to wonder about enforcement. Does Wal-Mart have the same sort of security manpower as casinos?

National borders certainly have that kind of manpower, and Marcus is correct to point to passport control as a system with both a whitelist and a blacklist. There are people who are allowed in with minimal fuss, people who are summarily arrested with as minimal a fuss as possible, and people in the middle who receive some amount of fussing. Airport security works the same way: the no-fly list is a blacklist, and people with redress numbers are on the whitelist.

Computer networks share characteristics with your office and Wal-Mart: sometimes you only want a few people to have access, and sometimes you want almost everybody to have access. And you see whitelists and blacklists at work in computer networks. Access control is whitelisting: if you know the password, or have the token or biometric, you get access. Antivirus is blacklisting: everything coming into your computer from the Internet is assumed to be safe unless it appears on a list of bad stuff. On computers, unlike the real world, it takes no extra manpower to implement a blacklist—the software can do it largely for free.
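The asymmetry is easy to see side by side. A minimal sketch, with placeholder lists, of the two default decisions:

```python
# Whitelist: deny unless explicitly approved. Blacklist: allow unless
# explicitly known-bad. Both are one-line checks, which is why neither
# model costs extra manpower to enforce in software. Lists are placeholders.
authorized_users = {"alice", "bob"}     # whitelist (access control)
known_bad_hashes = {"deadbeef" * 8}     # blacklist (antivirus signatures)

def whitelist_allows(user: str) -> bool:
    return user in authorized_users           # default: deny

def blacklist_allows(file_hash: str) -> bool:
    return file_hash not in known_bad_hashes  # default: allow

print(whitelist_allows("mallory"))       # False: unknown means blocked
print(blacklist_allows("cafebabe" * 8))  # True: unknown means allowed
```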

Traditionally, execution control has been based on a blacklist. Computers are so complicated and applications so varied that it just doesn’t make sense to limit users to a specific set of applications. The exception is constrained environments, such as computers in hotel lobbies and airline club lounges. On those, you’re often limited to an Internet browser and a few common business applications.

Lately, we’re seeing more whitelisting on closed computing platforms. The iPhone works on a whitelist: if you want a program to run on the phone, you need to get it approved by Apple and put in the iPhone store. Your Wii game machine works the same way. This is done primarily because the manufacturers want to control the economic environment, but it’s being sold partly as a security measure. But in this case, more security equals less liberty; do you really want your computing options limited by Apple, Microsoft, Google, Facebook, or whoever controls the particular system you’re using?

Turns out that many people do. Apple’s control over its apps hasn’t seemed to hurt iPhone sales, and Facebook’s control over its apps hasn’t seemed to affect Facebook’s user numbers. And honestly, quite a few of us would have had an easier time over the Christmas holidays if we could have implemented a whitelist on the computers of our less-technical relatives.

For these two reasons, I think the whitelist model will continue to make inroads into our general purpose computers. And those of us who want control over our own environments will fight back—perhaps with a whitelist we maintain personally, but more probably with a blacklist.

This essay previously appeared in Information Security as the first half of a point-counterpoint with Marcus Ranum. You can read Marcus’s half there as well.

Posted on January 28, 2011 at 5:02 AM
