Google Detects Malware in its Search Data

This is interesting:

As we work to protect our users and their information, we sometimes discover unusual patterns of activity. Recently, we found some unusual search traffic while performing routine maintenance on one of our data centers. After collaborating with security engineers at several companies that were sending this modified traffic, we determined that the computers exhibiting this behavior were infected with a particular strain of malicious software, or “malware.” As a result of this discovery, today some people will see a prominent notification at the top of their Google web search results….

There’s a lot that Google sees as a result of its unique and prominent position in the Internet. Some of it is going to be stuff they never considered. And while they use a lot of it to make money, it’s good of them to give this one back to the Internet users.

Posted on July 20, 2011 at 6:23 AM • 46 Comments

Comments

GreenSquirrel July 20, 2011 7:26 AM

In principle, yes it is good.

It is a touch concerning as to how much of “our” data they are able to analyse but that is a whole different set of discussions.

In this current incident, it is a shame that Google hasn’t given more information as to what the infection is, and, a bit more worrying, do we want to encourage users to follow a big warning banner saying “Your computer is infected, click here to fix it”?

If I saw that, my first assumption would be to ignore the scareware, as it is more likely to be an attempted attack than the real thing, so I would ignore Google. If they then educate users to follow the link and “install the software”, it will undo a decade’s worth of security education.

Danny Moules July 20, 2011 7:39 AM

So now people trying to sell fake anti-virus products don’t even need to accost you with those irritating products. They simply wait for you to visit google.com and inject a similar message into the page you’re viewing – you trust it because it’s Google and you know Google does this. You and your money are soon parted.

I’m not sure this is such a great victory for the web in the long-term.

SparkyGSX July 20, 2011 8:05 AM

The question is, then, what Google should have done. Apparently, they have the ability to detect the search pattern a specific type of malware produces.

I can’t see how not using this ability would help anyone, and I think it’s a good thing Google is trying to help the people who have this particular piece of malware on their system, by alerting them, and possibly directing them to a method of clearing the malware.

We shouldn’t forget that it’s also in Google’s best interest to eliminate this malware, because it’s generating traffic and server load on their part for which there is no advertising income.

The question remains how Google should alert people to the problem in such a way as not to give all “you’re infected!” banners credibility.

I think this boils down to a basic limitation of the internet, and most other mass media; average users don’t have any way to determine the credibility of any information they find.

@Danny Moules: to be able to do that, the machine must already be infected, or the scammer must have some other way to inject the message into a page trusted by the user. It’s not entirely unthinkable, of course.

Danny Moules July 20, 2011 8:10 AM

“@Danny Moules: to be able to do that, the machine must already be infected […] It’s not entirely unthinkable, of course.”

It’s essentially the standard digital vector for wringing money out of gullible people nowadays, unfortunately.

Clive Robinson July 20, 2011 8:18 AM

In any hierarchical system, the closer to the top you get, the more responsibility you have for the layers below.

Also the closer to the top the more likely you are to be attacked by those wishing to assume your position.

However, as always with crime, the closer you are to the top, the more likely you are to be robbed.

Thus Google, being in many ways at the top of the heap, has a vastly greater set of worries than other organisations.

The difference from the normal hierarchical situation is that their income model is by no means the norm for such commercial structures.

So as noted above yes it is good they picked up on it but also bad in that the remedy is so clunky and awkward.

Perhaps they should take this on board and develop a new model for “patching” as this is almost certainly not going to be the last time such a malware attack is launched against them.

Dilbert July 20, 2011 8:18 AM

If there are common strings (or perhaps user agents) associated with known malware why not just block it?

GreenSquirrel July 20, 2011 8:33 AM

Sadly, because Google don’t seem to have told us what this malware is, it is hard to second-guess what options they had to block it.

Should Google act as our anti-virus? Is the risk of educating users to click on scareware warnings outweighed by the benefit of getting some to update their AV? Although I suspect that if they are wilfully surfing without some form of protection, it is probably already too late for them.

I especially like this helpful note from good old Google:

“You may have been tricked into downloading this software when visiting a site”

(such as google…)

Kevin July 20, 2011 9:20 AM

If a single infected workstation is behind a proxy or NAT gateway, will google display this warning for all visitors who use that same proxy?

If so, this could pose a real headache for corporate help desks where hundreds or thousands of users all share the same visible IP.

Paeniteo July 20, 2011 9:31 AM

@Kevin: “If a single infected workstation is behind a proxy or NAT gateway, will google display this warning for all visitors who use that same proxy?”

I wouldn’t think so.
If a search query matches their malware-detector, the result page for this search query will contain a warning.

Unfortunately, the malware could trivially be improved to remove the warning from the result page… cat&mouse…
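(To make the mechanism concrete, here is a minimal sketch of how such server-side warning injection could work, in Python. It is purely illustrative – Google has not published its detection logic – and it assumes, only as a guess, that the detector keys on requests arriving via the rogue proxy addresses associated with the malware; the IPs and HTML below are made up.)

# Illustrative sketch only, not Google's code. Assumes (as a guess) that the
# detector keys on requests arriving via the malware's rogue proxy addresses.
KNOWN_BAD_PROXIES = {"203.0.113.7", "203.0.113.8"}   # made-up placeholder IPs

WARNING_BANNER = (
    '<div class="malware-warning">Your computer appears to be infected. '
    'See our help page for how to fix this.</div>'
)

def render_results(client_ip, results_html):
    """Prepend a warning to the results page when the request looks infected."""
    if client_ip in KNOWN_BAD_PROXIES:
        return WARNING_BANNER + results_html
    return results_html

print(render_results("203.0.113.7", "<ol>...normal search results...</ol>"))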

Dave Polaschek July 20, 2011 10:48 AM

Is it just me, or does anyone else have a problem with believing that googleonlinesecurity.blogspot.com is really put out by google? Because it says so on the site? Or that googleblog.blogspot.com is really the official google blog? There don’t seem to be any links to either of them from google.com itself.

Yeah, I’ve heard that they’re real from a google employee, but why should you trust me? It’s sad that even big companies that should know better are so lackadaisical about offering users some way of knowing that what they’re doing is genuine.

RH July 20, 2011 10:49 AM

@Matthias Urlichs: not quite. If the malware can read the browser’s memory, it can intercept the message. All encrypted.google.com can guarantee is that, upon leaving your computer, it is uninterceptable until it gets to google.

Dilbert July 20, 2011 10:57 AM

@Dave Polaschek,

Both googleonlinesecurity.blogspot.com and googleblog.blogspot.com have this cross-posted, and they reference each other in the post. So, although you had a valid concern, it seems to be legit.

Eric Grunin July 20, 2011 11:01 AM

@Dave: Google owns Blogger (including blogspot.com), so googleonlinesecurity.blogspot.com is a relatively trustworthy name.

karrde July 20, 2011 2:40 PM

@Eric,

If Google has not set up the ‘googleonlinesecurity’ site on blogspot, I suspect that I could do so.

@All, other sources (seen on Slashdot) seem to indicate that the malware set itself as a local proxy for the web, to implement a MITM attack on the search process. The search results were doctored to bring in some ad-laden (or otherwise unwanted) responses.

As to what to do when it filters out Google’s warnings…that problem is impossible for Google to solve completely.

Somewhat regained trust in Schneier July 20, 2011 4:53 PM

Google seems to me to be such a behemoth… I guess let us hope they use their enormous power and influence for “good”, whatever that means… “Don’t be evil”, lol

Damian Menscher July 20, 2011 5:30 PM

Some quick comments to the points raised here:
– No personal user data was used to discover this or produce the warnings, only aggregated data from the millions of infected machines.
– Using https://encrypted.google.com/ will help, but leaves an infected machine behind. I’d rather fix the root problem than hack around it.
– We’re intentionally NOT telling anyone to download anything. We tried to strike a balance between informing people without teaching them bad habits.
– If you have two machines behind NAT, only the infected machine will see the warning. And if you clean the infected machine, the warning will go away immediately.
– We recognize that the proxies could strip the warnings. If they do, we have other ideas for how to continue to inform users.

We knew displaying the warnings would be controversial. But let’s keep the conversation productive: If you had the opportunity to identify a million infected machines, and display a message directly to the machine’s owner, what would you have done?

Dirk Praet July 20, 2011 5:55 PM

Interesting, but I guess it is reasonably safe to assume that most people running Windows and not applying proper digital hygiene are infected with one or more pieces of malware. Out of those – in my personal experience – many don’t give a damn as long as it’s not causing crashes, reboots or slow-downs, and until such a time that their bank account has been plundered or their Visa card charged by someone in Belarus.

Almost every day, I receive spam and phishing stuff from friends and acquaintances whose machines have been zombified. I usually end up blacklisting their addresses, as most will apologise but nearly always are either unwilling or unable to properly disinfect their machine. I doubt that warnings from Google or anybody else will change that behaviour.

Dave Polaschek July 20, 2011 7:33 PM

@Dilbert & @Eric,

As @karrde points out, if google hadn’t already set up googleonlinesecurity.blogspot.com and googleblog.blogspot.com I probably would have been able to.

Whether or not it’s cross-posted between two blogs I can’t verify really doesn’t add to the security or verifiability of the blogs. They could just as easily both be (fairly clever and labor-intensive at this point) fakes for all I can see from where I sit.

@Damian, I think it’s a good warning, but I wish that, if the posting about it is official, it had been announced on a google website rather than on a subdomain of blogspot.com that’s easily faked.

I see it as the same sort of cluelessness that has Adobe (I work for Adobe) sending out marketing emails from adobeinfo.com which is owned by some other company that does marketing for Adobe. It’s rather amusing having emails from the company I work for getting flagged as spam, but there you have it. I’m clearly in the paranoid minority.

Richard Steven Hack July 20, 2011 8:02 PM

Dirk: The reason people don’t clean their machines is because they can’t. While most of the better anti-malware suites CAN get rid of ninety percent of the malware on someone’s PC, the other ten percent has to be rooted out by other utilities run by someone who knows what they’re doing.

In most cases, you need a tech like me to come in and do a thorough cleaning. On average this is going to take at least four hours, usually more when you add in making sure Windows is patched and installing better anti-malware programs to protect the user in the future.

Which means it’s going to cost the end user at least $100 (at my cheap home user rates) and maybe as high as $400 depending on what the tech charges.

Of course, the user can reinstall his OS – IF they have backed up their data (good luck with that) and IF they have a reinstall partition (and know what that is) or a reinstall CD (good luck with that these days) and IF they want to spend at least four hours reinstalling, re-updating and re-installing all their apps. Or pay Geek Squad a couple hundred dollars for this.

My impression from my clients is that they only call when the machine won’t boot or is so slow as to be unusable or the popup ads are driving them crazy or a fake AV has scared them.

A slow machine usually means they have literally scores or hundreds of pieces of malware on the system. I have removed as many as 900 pieces of malware from a single PC! And in one case that was AFTER someone else had removed an initial 900 pieces of malware! (That client had gone on vacation for months and her roommate left the PC sitting on a porn site! 🙂 )

tommy July 20, 2011 10:27 PM

“As we work to protect our users and their information…” should read “As we data-mine our users’ information”. It’s hard to see how a routine search request could tell them that it came from an infected machine, but the intro is still the usual hypocrisy and euphemism.

The “small number” suggests this wasn’t a DoS attack. The crackers seem so far to have little to gain, though we have little information to go on – unless the user didn’t generate the request, but rather malware in the machine originated the request on its own. Then there is a payoff in getting your own page searched more times, clicked more times, and rising in SE rankings.

“performing routine maintenance on one of our data centers.” Ah, yes, the common euphemism for “snooping”.

Try https://ssl.scroogle.org/ instead.

In late 2009, I discovered a malware in someone’s machine that did the opposite: Whenever the location bar contained Google, Ask, Yahoo, or Bing, it would be redirected to a site in Asia. We don’t know what the payload was supposed to be there, except perhaps page hits. NoScript prevented the evil site from running any script on those who landed there.

This report made it into SANS Internet Storm Center (yes, I’m the “Tom” referred to):

http://isc.sans.org/diary.html?storyid=7765

Richard Steven Hack July 21, 2011 12:48 AM

Tommy: “It’s hard to see how a routine search request could tell them that it came from an infected machine”

As I read it, they disconnected one of their data centers for maintenance, which meant no search requests should have been directed there. But they noticed a stream of requests.

Brian Krebs interviewed the Google engineer who noticed this:

“Google security engineer Damian Menscher said he discovered the monster network of hacked machines while conducting routine maintenance at a Google data center. Menscher said when Google takes a data center off-line, search traffic directed to that center is temporarily stopped. Unexpectedly, Menscher found that a data center recently taken off-line was still receiving thousands of requests per second.

Menscher dug further and discovered the source of the traffic: more than a million Microsoft Windows machines were infected with a strain of malware designed to hijack results when users search for keywords at Google.com and other major search engines. Ironically, the traffic wasn’t search traffic at all: The malware instructed host PCs to periodically ping a specific Google Internet address to check whether the systems were online.”

So the malware gave itself away by pinging a Google address that was out of the search queue. Then they figured out that it was all coming in from specific proxies.

They say the malware involved, which is unnamed but probably one of these fake AVs, has a signature they can detect. They haven’t said what that is, however, but presumably it’s the proxy IP or something like that.
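(As a rough sketch of the behaviour Krebs describes – a periodic “am I online?” probe against a hard-coded address rather than a DNS name – something like the following would do, in Python. The IP below is a documentation placeholder, not the address the malware actually used, and a TCP connect stands in for the ping, since a literal ICMP ping needs raw-socket privileges.)

import socket

# Placeholder standing in for the hard-coded Google IP the malware reportedly
# probed; the real address was never published.
PROBE_IP = "203.0.113.80"

def is_online(ip=PROBE_IP, port=80, timeout=3.0):
    """Crude connectivity check against a fixed IP. Note that no DNS lookup
    ever happens, which is why this traffic kept arriving at a data center
    that had been pulled out of the search rotation."""
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False

print(is_online())   # the malware reportedly ran a check like this periodically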

S July 21, 2011 3:19 AM

This is a terrible idea that hasn’t been nearly well enough thought through.

I can see that it’s a neat technical trick, and it’s interesting from an intellectual point of view, but for those of us who have to support people who know very little about computers (i.e. most people), it’s going to be a nightmare.

I’m sure I’m not the only person that’s spent the last few years trying to hammer the message into people’s heads that ‘if you’re surfing the internet and something pops up saying you have a virus, IT IS A LIE. Your web browser does not detect viruses. Do not believe it, and do not click on it. Ring me if you’re unsure.’

Now we have to try and educate them as to this new paradigm? Anything that pops up that says you’ve got a virus is a lie, except if it’s a yellow box on Google you can trust it – remember, these are people who would struggle to know the difference between Google and the internet in the first place. And that doesn’t touch upon the next problem, which is when the scammers start emulating said yellow box.

/is VERY glad he doesn’t work in desktop support any more, and only looks after family and a few select friends…

Amina Imam Abubakar July 21, 2011 6:21 AM

Sir,
Please, I am interested in studying security in social networking applications: where to start and which algorithms to use. I am currently doing my MSc in comp. sci. at UDUS, Nigeria; any help in reframing my topic is welcome.

GreenSquirrel July 21, 2011 6:58 AM

This blog doesn’t half attract some weird comments….

So, @Amina, if you want some help then I am sure you will be back here reading the thread – can you give me some more details on your current topic?

(its ok, I wont hold my breath waiting)

Danny Moules July 21, 2011 7:14 AM

“If you had the opportunity to identify a million infected machines, and display a message directly to the machine’s owner, what would you have done?”

Not include a link that could be spoofed by people with malicious intentions. Just thinking aloud.

Indeed, if I were to include such a link it would also be a perfect opportunity to create a phishing attack on my site itself by posing a log-in page as required to view the document provided. Cha-ching!

But then I’ve already established you guys strictly conform to the view that if a user clicks a link they’re an idiot who deserves any fate as a matter of policy. Which makes me wonder why you decide to care now whilst leaving other holes open.

karrde July 21, 2011 8:22 AM

@RSH, if you’re right, that makes the detection-of-malware scenario a little less strange.

I’m amused that the malware used a specific IP for a Google datacenter as its ‘Am I connected to the internet?’ test. That’s something I do when my at-home Internet connection needs to have the cable-modem reset. However, I usually issue a ‘ping -c 4 www.google.com‘ command.

If the malware uses a specific IP, then anyone who scans the EXE for strings won’t see the string ‘google.com’ stand out. But if they are suspicious, and looking at the strings in the file, will they see an IP address? Or would it be hard-coded as the 32-bit number usually represented by the “W.X.Y.Z” string format?

What other advantages would the hard-coded IP provide?
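(On that question: whether the address stands out to a strings scan depends entirely on how it is stored. A quick Python sketch of the difference, using an arbitrary documentation address rather than whatever the malware actually targeted:)

import socket
import struct

addr = "203.0.113.80"          # arbitrary example, not the malware's target

# Stored as text, the dotted quad survives verbatim in the EXE, so a plain
# strings scan would show it.
as_text = addr.encode("ascii")

# Packed the way inet_aton()/sockaddr_in actually hold it, it is just four
# opaque bytes (one 32-bit number) -- nothing a strings scan would flag.
packed = socket.inet_aton(addr)
as_int = struct.unpack("!I", packed)[0]

print(as_text)        # b'203.0.113.80'
print(packed)         # b'\xcb\x00qP' -- unremarkable binary
print(hex(as_int))    # 0xcb007150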

Richard Steven Hack July 21, 2011 9:50 AM

Karrde: The hard-coded IP has to be a string in the code, I would think, since it has to be passed to a ping command. Although they might have been pinging an actual server name. More likely it was just a random IP they picked out of the network assigned to Google, which just happened to be out of the search queue at the time.

You ping google.com if your DNS server is working. You ping an IP address if it isn’t.

I was just doing that last Saturday when DSLExtreme had yet another outage in my area (the third in a couple months). Neither worked, unfortunately. I called and was told they had an outage. Later, my router picked up an IP address and DNS server IP addresses, so the servers were back up – but pinging any URL didn’t work as DNS still wasn’t working until hours later.

tommy July 21, 2011 9:22 PM

@ Richard Steven Hack:

Thanks for clarifying. I missed that they had taken the server off-line, yet traffic was still being directed, not to google.com, but to // that specific server //. Yup, red flag.

So was the fact that it wasn’t search queries at all, but rather pings. Neither of those details was in the original article Bruce linked; actually, the article said “search traffic”:

“Recently, we found some unusual search traffic while performing routine maintenance on one of our data centers. After collaborating with security engineers at several companies that were sending this modified traffic, we determined that the computers exhibiting this behavior were infected with a particular strain of malicious software, or “malware.”

That didn’t tell us anything. Thanks for the info from the follow-up interview with Krebs.

But now, it leaves another question: if it was just ping traffic, then the benefit I tried to picture — getting your web site or whatever searched for frequently — disappears. So, what is the benefit to the attackers? It doesn’t sound even close to enough to DoS them — unless it was a test, with a massive ping flood later. But I’d not have tipped my hand if I were EvilDude — I’d have all my thousands of bots in place, then push the button.

Also, I guess I owe Google an apology for //this particular instance//, since it is indeed strange that an off-line server is getting specifically-targeted traffic. Doesn’t change the overall view, though.

@ karrde and Everyone Else:

For testing connectivity, I just ping www.example.com. It works; it doesn’t put a load on real sites, and IANA doesn’t seem to mind. Just a thought.

GreenSquirrel July 22, 2011 6:38 AM

@Tommy

“So, what is the benefit to the attackers?”

It lets them know that their malware is connected to the internet.

It doesn’t seem to be part of a ping of death or anything, but reading through bits here it looks like this is just a side effect of whatever the malware does to check its connectivity.

I assume that this would then be followed up with a connection to whatever command channels it uses before it does its thing.

If this is the case (and I don’t think what google have put out on a dodgy-looking blogspot site gives enough information to work it out fully) then at best Google have closed down a small subset of malware. There could be countless other packages quietly pinging other google servers, Bing or even BT….

tommy July 22, 2011 8:31 PM

@ GreenSquirrel:

So why not just see if the malware successfully connects to its command channel, rather than using large servers that are frequently maintained and audited, as happened here? Even my suggestion a bit above about using example.com might be less likely to be detected.

Clive Robinson July 23, 2011 4:15 AM

@ tommy,

“So why not just see if the malware successfully connects to its command channel, rather than using large servers that are frequently maintained and audited, as happened here?”

I’ve actually answered this question some time ago, and it’s one of those answers that you will kick yourself for.

You send network packets to google because it’s virtually guaranteed that users on any network will send requests to google.

Thus you stay below the noise threshold of detectors on the local network. That is, from the average admin’s perspective, traffic to google is “dumb user” traffic; traffic to unknown IP address w.x.y.z is suspicious. Also, for many companies “google is a business enabler” and so is effectively “mandated”, even though facebook et al are chopped.

Where these malware people made a mistake was not doing the job properly: they hard-coded an IP address in, which is not how 99.9999% of computers reach google.

If they had done an ordinary DNS lookup, as would have happened with an ordinary user, then this would not have been detected the way it was.

So the question that should be asked is why they did not, and this might prove rather interesting…

Damian Menscher July 23, 2011 12:29 PM

We have learned a common name for this malware is FakeVimes, and are working with AV companies to improve their detection of it.

My guess of why they probed a static IP:
The malware changes the hosts file to hard-code www.google.com and several other domains to a proxy they control. This probably makes it hard for them to do a DNS lookup for www.google.com – they can’t just use getaddrinfo() as that would return their proxy IP.
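(A small sketch of why that is, in Python, assuming the hijack works through the ordinary hosts-file mechanism: getaddrinfo() consults the hosts file before DNS, so once www.google.com is mapped to the proxy, the normal lookup path only ever hands back the proxy’s address. The proxy address shown is a placeholder.)

import socket

# On an infected machine whose hosts file contains a line like
#   203.0.113.7   www.google.com      (placeholder proxy address)
# the standard resolver path returns the proxy, not Google:
addrs = {info[4][0] for info in socket.getaddrinfo("www.google.com", 80,
                                                   proto=socket.IPPROTO_TCP)}
print(addrs)   # on such a machine this would print {'203.0.113.7'}

# Learning a real Google address would mean bypassing the hosts file (e.g.
# speaking DNS to a resolver directly) -- extra work, which may be why the
# authors fell back to a hard-coded IP instead.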

The proxy operator is actively attempting to strip our warnings. We’ve been fighting back even though we know we’ll lose eventually. Hopefully we can get lots of machines cleaned before that happens!

Miriam R. July 23, 2011 2:02 PM

There seems to be a lot of old information about malware in the commentary, and I’d like to clarify a few points:
1) An end user with a vulnerable browser and operating system can get malware by doing nothing but visiting a site with malicious code. It’s not necessary to respond to a pop-up, visit an obviously dodgy site, intentionally install anything or “be stupid”. It’s taken until 2011 for browsers to have sandboxing baked in, and it’s still not the default yet.
2) “Vulnerable” doesn’t necessarily mean a careless failure to do routine updates and maintenance. Even tracking US-CERT advisories, vendor alerts, installing the latest versions of everything and following the best security advice won’t stop the efforts of thousands of black hats creating new 0-day exploits.
3) Props to Google – modern malware has gotten exceptionally stealthy. End users may be infected for months, transmitting every login, password and form entry for everything, clueless until they start getting fake AV extortion-ware popups. Do a search on “TDL4” and you’ll see just how sophisticated the people who write this stuff are. Antivirus software acting within an infected operating system will never see, let alone be able to remove, most of the latest threats.
4) Once malware has been installed on a system, it’s difficult or impossible to know all the ways in which it may have been compromised. On Windows systems, these packages usually disable many security settings in the registry, leaving the system extra-vulnerable to future attack. Most “removal” techniques are oriented around disabling the functions that let it phone home with your passwords or remote control your computer, not restoring all the security settings.
5) The best advice is usually to format drives, reinstall the operating system, then restore data. Unfortunately, many end users may not have clean, restorable backups, and are compelled to clean up enough to get their data off. They’re then confronted with the fact that their systems didn’t come with independent restore disks, that they’re missing software, and that they have to connect to download drivers and updates using old, insecure operating systems and browsers!

From the standpoint of home and small business users [and even some medium/large ones that have unmanaged desktops/laptops without backups, minimal control at the edge of the network, and overworked IT staff] the current security environment is an ongoing nightmare.

In particular, Microsoft needs to seriously reconsider its policy of using “it’s more secure” as a means to force operating system upgrades.

If they hadn’t scrambled Internet Explorer into the operating system in the first place, Windows would have been easier to secure against web-based threats. User Account Control security is half-baked at best, and the notion that the next version of the Windows user interface will be built around a browser (HTML5, at least it’s an open standard…) keeps me awake at night.

Given all this, the server team at Google has my gratitude for trying to do the responsible thing. Not the best implementation, but they had a unique opportunity to give warnings, and didn’t bury it in bureaucracy or fear of lawsuits.

Clive Robinson July 23, 2011 2:49 PM

@ Damian,

“This probably makes it hard for them to do a DNS lookup for www.google.com – they can’t just use getaddrinfo() as that would return their proxy IP….”

Yup, that’s one problem they would have to face (there are one or two others – such is the work of getting a concept through to bullet-proof product 😉

There is also something else you might want to consider when sitting back in the easy chair…

Put yourself in the malware writer’s shoes, and say to yourself “OK I’ve a toe hold in this network segment on this MS OS how do I expand my influence to all OS’s?”

I did this sort of thing a number of years ago and it’s interesting to see just what you can poison and how. The majority of our systems are still to some extent built on the implicit network trust model of the 1960s, such is the power of “standards” (which is why I bang on about them being a security issue, along with “transparent backwards compatibility”).

tommy July 23, 2011 7:56 PM

@ Clive Robinson:

Thanks for clearing that up. I’m sure I wasn’t around when you answered it before, because it’s a simple and memorable answer.

But I still don’t “get” what is the ultimate goal here. Bad guy has verified that malware connects. What next? If it’s to DoS Google or any other corporate-allowed domain, I could understand that, though the admin would still see a huge jump in the /volume/ of traffic. If it’s something else — some target that isn’t on the corp-allow list, such as phoning home to EvilDude with the passwords, etc., then the connection to that site either won’t be allowed, or will still cause that suspicious bump in traffic to a site that is not popular enough to hide the increase among the noise.

None of this applies to home users in terms of forbidden sites or traffic volume, as the malware would undoubtedly send a small volume of pws or whatever, at occasional times, probably not visible to most home users.

“If they had done an ordinary DNS look up as would have happened with an ordinary user then this would not have been detected the way it was. So the question that should be asked is why they did not, and this might prove rather interesting…”

Indeed. Good insight as always, and looking forward to seeing if we find out why.

@ Damian Menscher:

“My guess of why they probed a static IP: The malware changes the hosts file to hard-code www.google.com and several other domains to a proxy they control.”

Sounds reasonable. But I would hope that all enterprises lock the Hosts file to the sole control of the admin, so that the malware would have to subvert admin control of that, too. Which, of course, is possible.

Many home firewalls, AVs, and other tools also provide the ability to lock the Hosts file, though I don’t know how many of them lock it by default. Seems they should, as non-tech users won’t know to do it, and don’t make changes to Hosts themselves. Again, the malware could possibly change the firewall and AV settings first.

It also occurs to me that they should have thought to randomize the pings among a number of IPs, either within Google’s domain, or better yet (from their POV) across other large domains like Yahoo, Bing, AOL, etc. The occasional stray to a down-time server might have been much less noticeable.

Clive Robinson July 24, 2011 1:02 AM

@ tommy,

“But I still don’t “get” what is the ultimate goal here[?]”

The trite answer is “camouflage”; it is also possibly incorrect, since we don’t know what was in the malware writer’s head.

So I’m guessing, based on my previous experience, and thus (possibly falsely) transposing my point of view onto the malware writer.

There is the saying that scientific discovery occurs not with the “eureka moment” but with the comment of “that’s odd” that precedes it.

Security is very much like this: it is the ability to spot oddities that gives rise to people finding mal-ware/intent/… either because things do not behave as before, or behave differently for different people.

Thus to hide successfully, malware has to look and behave as expected, and the malware writer has to build this in, especially in times of fault, when many eyes are looking for a problem to be the explainable cause of the unwanted effect.

Now I’m guessing with hindsight that if you have written malware that subverts google searches, one of the things you don’t want is your malware (and its attendant site) flagging up its existence by dishing out search results to one user on a network when google is actually not there for other users.

That said I’m probably wrong and the malware writer had a different reason if any.

However it brings up an interesting question (for me anyway) about hiding malware from detection: to be “bullet proof” the malware has to mimic as closely as possible the real-world responses, both under “normal operation” and “expected fault operation”.

Whilst mimicking “normal operation” is not hard and may require only a few extra CPU cycles and a little code space, mimicking “expected fault operation” is both hard and requires considerably more resources.

In effect you have to build a whole catalog of environmental responses and appropriate detection systems to trigger them.

If you have a hunt back on this blog you will see conversations between Nick P and myself about “Castles -v- Prisons”.

My viewpoint is that the majority of computers in use are “castle” systems, with monolithically large programs and memory spaces that allow malware sufficient “elbow room” to effectively hide below the noise floor.

However, in a prison the cells are small, carefully controlled environments where hiding even a very small piece of malware would be difficult, and impossible for the large piece of malware needed to emulate the correct response under fault.

Richard Steven Hack July 25, 2011 12:50 AM

Miriam R: Good points and quite correct.

“1) An end user with a vulnerable browser and operating system can get malware by doing nothing but visiting a site with malicious code.”

Yup. Fortunately the number of malware writers targeting Linux is still minuscule. Even Macs are slowly building up as a target, but Linux is still mostly under the radar. It’s easy for malware writers to hijack a browser on Linux, but hard to do much else.

It’s so much easier to hit Windows, not just because of the large installed base, but because ninety percent or more of end users – even on a lot of small business networks if not domain networks – are running as administrator by default. On those systems, hit the browser and you own the machine. Hit an un-patched system and you own the machine even without any user interaction at all, browser or not.

But on Linux, the end user can’t give permission to anything to run on the system level without at least entering the root password to a prompt. Why Windows UAC didn’t do the same baffles me. Not that it would have stopped end users from automatically doing so, of course.

“2) “Vulnerable” doesn’t necessarily mean a careless failure to do routine updates and maintenance. Even tracking US-CERT advisories, vendor alerts, installing the latest versions of everything and following the best security advice won’t stop the efforts of thousands of black hats creating new 0-day exploits.”

True. But it doesn’t help that MANY end users – my guess is at least fifty percent of home users – aren’t committed to patching their systems at all. Or they ignore that little orange shield for weeks or don’t even know it’s there, begging to update (until they turn off their machine and Windows does it anyway.)

“3) End users may be infected for months, transmitting every login, password and form entry for everything, clueless until they start getting fake AV extortion-ware popups…Antivirus software acting within an infected operating system will never see, let alone be able to remove, most of the latest threats.”

Yup. And to be able to use the more penetrating anti-malware software that can do the job, you have to know about things like processes, Windows services, the kernel, etc. which are way beyond an end user’s capacity to grasp.

“4) Once malware has been installed on a system, it’s difficult or impossible to know all the ways in which it may have been compromised… Most “removal” techniques are oriented around disabling the functions… not restoring all the security settings.”

Yup. Although there are several freeware utilities out there that are oriented around repairing virus damage, by restoring Registry permissions, re-enabling access to the command line, re-enabling exe execution, fixing Internet Explorer, etc. Very useful stuff. But the end users don’t know about them.

“4) The best advice is usually to format drives, reinstall the operating system, then restore data.”

Yup. And end users’ issues with that process are why, despite Microsoft’s recommendation to wipe and reinstall, cleaning is still the best option. Although it can be as or more time-consuming, and definitely more expensive, than a reinstall.

The problem with the latest round of malware is that just running an AV scan doesn’t solve the problem. Even running a specific antispyware utility tends not to fully clean the system. You need someone who can run the more specialized tools that examine processes and services and detect rogue software while it’s running. More and more, you need to run one or more rootkit detectors. Then you need tools that can terminate or suspend suspicious processes and services, and then tell you what file and registry keys are running those processes, and then tools to unlock those files and registry keys so you can delete them.

Which means the end user either needs to hire a techie or they need to go to one of the spyware help sites like Bleeping Computer where they can get help. And the latter will never be able to keep up with the degree of infestation end users have.

Which is why a bunch of guys like me are making a (bad) living cleaning spyware off home users machines.

A lot of end users these days will just junk the machine and buy a new one at Best Buy for $300-400 rather than pay someone like me $25-100/hour to clean their systems. It just doesn’t make economic sense to spend hundreds on cleaning.

But of course that means they end up doing the exact same things they did before which got them infected – because they didn’t get the advice and protective utilities installed that a tech or a help site could have gotten them.

“From the standpoint of home and small business users [and even some medium/large ones that have unmanaged desktops/laptops without backups, minimal control at the edge of the network, and overworked IT staff] the current security environment is an ongoing nightmare.”

Yup. And it’s only going to get worse.

“In particular, Microsoft needs to seriously reconsider its policy of using “it’s more secure” as a means to force operating system upgrades.”

Aside from the fact that it’s probably NOT “more secure”, even if it is, that merely means the hackers shift from old techniques to new techniques to penetrate the new security. Nothing changes in the long run.

“If they hadn’t scrambled Internet Explorer into the operating system in the first place, Windows would have been easier to secure against web-based threats.”

To some degree. It’s still a monolithic system which for years enabled every user to run as administrator by default and was, as is every OS, loaded with vulnerabilities due to bad system design and even worse coding practices. And that won’t change until software engineering changes and starts recognizing security as a primary design goal – along with usability and reliability – rather than just functionality.

Which is all Microsoft and the rest of the industry cares about – to sell new systems with a bunch of half-baked new “features”.

“the notion that the next version of Windows user interface will be built around a browser (HTML5, at least it’s an open standard…) keeps me awake at night.”

Heh, yup. Malware writers around the world are salivating for Windows 8! Not to mention the whole “network is the computer” (it’s not and never was) and “it’s in the cloud” world revolving totally around networking, which people understand even less how to secure than personal computers.

Richard Steven Hack July 25, 2011 12:54 AM

Clive: The problem with prisons from the prisoner’s viewpoint is that it restricts his freedom.

Guess how programmers will react when they’re told their freedom to do what they want in the code is highly restricted by security constraints.

They can’t even handle issues of usability and reliability in the industry these days, let alone security.

Guess how software companies management will react. Ditto.

The only way the industry will get security is to switch to automated AI software engineering suites.

Good luck with that. Email me when that happens.

tommy July 25, 2011 2:24 AM

@ Richard Steven Hack and Miriam R:

“Even tracking US-CERT advisories,”

You mean, like the ones in which US-CERT advised users to switch from IE to Firefox+NoScript? NoScript alone kills a huge proportion of web-based attacks, and has been known to block the execution of malware that made it into the machine elsewhere.

If not familiar, please check it out. noscript.net.

Richard Steven Hack July 25, 2011 8:19 AM

Tommy: Yup, NoScript is good – even if it’s a bit of a PITA to constantly be enabling a site when it doesn’t work right due to the scripts being blocked and you can’t tell which third-party sites are hanging things up.

I even use it on my Linux system Firefox because it blocks porn sites from hijacking my browser while still allowing me to download all the hot pics! 🙂

I recommend it to all my clients now, along with AdBlocker.

tommy July 25, 2011 5:47 PM

@ Richard Steven Hack:

“Yup, NoScript is good – even if it’s a bit of a PITA to constantly be enabling a site when it doesn’t work right due to the scripts being blocked and you can’t tell which third-party sites are hanging things up.”

Simple. Whitelist (permanently allow) your trusted sites. You can do that through the menu (hover over the NS logo in the browser bar and click “Allow goodsite.com”) or through the GUI (NS menu > Options > Whitelist).

Third-party sites: Most have no business being there, other than perhaps Akamai.net, and if you find that a certain greedy site requires allowing Google-analytics.com, there is a built-in “script surrogate” for that, and many other similar data-mining scripts. These send no personal info, but satisfy the site’s requirement to run the script.

http://hackademix.net/2009/01/25/surrogate-scripts-vs-google-analytics/

You can find the complete list of surrogate scripts by typing about:config in the address bar, then typing “surrogate” in the Filter Bar.

For your novice clients, there is a Quick Start Guide for Beginners,

http://forums.informaction.com/viewtopic.php?f=7&t=268

(Any similarity between the name of the author of that document and this writer is strictly a coincidence.)
