Evasive Malicious Code

New developments in malware:

Finjan reports an increasing trend for “evasive” web attacks, which keep track of visitors’ IP addresses. Attack toolkits restrict access to a single-page view from each unique IP address. The second time an IP address tries to access the malicious page, a benign page is displayed in its place.

Evasive attacks can also identify the IP addresses of crawlers used by URL filtering, reputation services and search engines, and reply to these engines with legitimate content such as news. The malicious code on the host website accesses a database of IP addresses to determine whether to serve up malware or legitimate content.
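
To make the mechanism concrete, here is a minimal sketch of the first-visit/repeat-visit switch (my own illustration, not code from the Finjan report or from any actual toolkit); the same lookup could just as easily hold a list of known crawler and reputation-service addresses:

```python
# Illustrative sketch only: serve the attack page on the first request from a
# given IP address, and a harmless decoy on every request after that.
from http.server import BaseHTTPRequestHandler, HTTPServer

seen_ips = set()  # stands in for the toolkit's IP database

ATTACK_PAGE = b"<html><!-- exploit content would be injected here --></html>"
DECOY_PAGE = b"<html><body>Nothing to see here.</body></html>"

class CloakingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        ip = self.client_address[0]
        if ip in seen_ips:
            body = DECOY_PAGE        # repeat visitor (or analyst): show the decoy
        else:
            seen_ips.add(ip)
            body = ATTACK_PAGE       # first visit from this address
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), CloakingHandler).serve_forever()
```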

Just another step in the neverending arms race of network security.

Posted on June 8, 2007 at 1:53 PM • 24 Comments

Comments

Anonymous Sysadmin June 8, 2007 2:45 PM

I imagine one way to counter such things is for someone to develop a reputation system based on distributed requests from clients’ own computers.

Say I’m a security firm. I distribute a piece of software to my customers. It contains code to establish secure communication with my servers and only my servers (the encryption to do this exists). The user configures how much bandwidth of theirs I am allowed to use. My servers connect to this program and it acts as a customized web proxy for me to crawl the web looking for malicious pages. It differs from a regular web proxy in that the headers it sends make the traffic appear to originate from a locally running browser. A side benefit is that if I find something, the IP will be added to the list and the user protected from seeing the malicious code.

Use your imagination to fill in the details… and to think of the counter-countermeasure…
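
Very roughly, the client-side piece might look like this (every name here is made up, and the secure channel back to my servers and the bandwidth accounting are waved away entirely):

```python
# Hypothetical client-side agent: fetches a URL on behalf of the security
# firm's scanner, sending headers that look like a local desktop browser.
import urllib.request

BROWSER_HEADERS = {
    # Placeholder browser-like headers; a real agent would mirror the user's browser.
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:115.0) Gecko/20100101 Firefox/115.0",
    "Accept": "text/html,application/xhtml+xml",
    "Accept-Language": "en-US,en;q=0.5",
}

def fetch_for_scanner(url, byte_budget=512 * 1024):
    """Fetch a page the way a local browser would, capped at a bandwidth budget."""
    req = urllib.request.Request(url, headers=BROWSER_HEADERS)
    with urllib.request.urlopen(req, timeout=15) as resp:
        return resp.read(byte_budget)
```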

Joe Patterson June 8, 2007 3:47 PM

Heh. So if I change my browser’s user-agent string to googlebot, I’ll get fewer attacks?

Not that that’s effective, but perhaps amusing…

GiggleStick June 8, 2007 4:00 PM

@Joe

I do something similar on occasion. Sometimes I’ve searched on a subject, Google finds a local newspaper article, and when I get there the site wants me to sign up for a free account to see the content. Well, obviously, Google was able to see the content; that’s how it found the page in the first place (in fact the little excerpt in the results showed the sentence with the words in question). Change your user-agent to the googlebot, and voilà, there’s the content. A bit faster than bugmenot if you have an extension to change your user-agent quickly. I guess in this case the “Sign up for an account” page was the malicious content. I think Google frowns upon sites serving different content to its bot than to regular users, and some major companies have been dropped from its results as punishment for doing that. I think it happened to BMW.
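
If you want to check a page without touching your browser settings at all, something like this does the same comparison programmatically (a rough sketch; the URL is a placeholder, and Googlebot’s exact user-agent string varies):

```python
# Fetch the same page twice, once as a normal browser and once claiming to be
# Googlebot, and see whether the server hands back different content.
import urllib.request

URL = "http://example.com/some-article"  # placeholder

def fetch(user_agent):
    req = urllib.request.Request(URL, headers={"User-Agent": user_agent})
    with urllib.request.urlopen(req, timeout=15) as resp:
        return resp.read()

as_browser = fetch("Mozilla/5.0 (Windows NT 10.0; Win64; x64) Firefox/115.0")
as_crawler = fetch("Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)")
print("Same content for both?", as_browser == as_crawler)
```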

David Dyer-Bennet June 8, 2007 4:56 PM

Presenting specialized content to web crawlers is an old search-engine optimization trick; sometimes it’s used fairly legitimately, when all it does is present the ordinary content in a form the search engine can read (plain text instead of Flash, say, in the early days), and sometimes it’s used to present entirely misleading information to the search engine in an attempt to lure more users. And yes, Google frowns on it, to the point where I at least wouldn’t dream of implementing even the benign form today, since there’s no way to argue with Google if they take against it.

Another Googlebot June 8, 2007 5:18 PM

I’ve found that NPR’s website serves up transcripts to browsers with the googlebot user-agent, but not to regular browsers. Since reading is about 10x faster than listening, I always browse as googlebot when looking at the news on NPR’s site.

Trey June 8, 2007 10:54 PM

Next we’ll see a browser plugin that requests each page twice and shows the second copy.

paddy_68 June 9, 2007 7:22 AM

Next we’ll see malicious websites that show the ‘bad page’ twice 😀

Jesse Weinstein June 9, 2007 12:17 PM

The link given in the post is just to a paraphrase of a press release from a company called Finjan. The press release, titled “Finjan’s Latest Web Security Trends Report Reveals New Genre of Evasive Attacks”, is here: http://www.finjan.com/Pressrelease.aspx?id=1527&PressLan=1230&lan=3. It announces a report they put out, “Web Security Trends Report (Q2 2007)”, which (after putting something into a “we-want-your-personal-information” form) can be downloaded from: http://www.finjan.com/GetObject.aspx?ObjId=443&Openform=50

The report includes various screenshots of the evasive attack; the attack tool they examined was “MPack v0.851 stat”. They also mention per-country blocking, as another feature of the same attack tool.

The report is worth reading, but it doesn’t go into much technical detail.

Anonymous June 10, 2007 11:32 AM

“Sometimes, I’ve searched on a subject, and Google finds a local newspaper and when I get there, it wants me to sign up for a free account to see the content. Well, obviously, Google was able to see the content, that’s how it found this page (in fact the little excerpt in the results showed the sentence with the words in question.)”

This actually strikes me as reasonable. People use Google to find information, and information behind a subscription is still information. What might be better is an arrangement between Google and the website in question which would simultaneously 1) allow access to the googlebot and only the googlebot, 2) not provide cached results, and 3) add a “SUBSCRIPTION REQUIRED” note before the link in the search results.

Simon June 11, 2007 2:15 AM

When bored, I occasionally click on phishing links to see what the sites look like. Everyone knows that phishing sites went from basic copies of graphics with poor grammar to quite polished sites that replicate all the HTML and JS content, with much better grammar and spelling. It’s only recently that I’ve noticed the phishers getting even smarter: browsing to the site twice (say, by refreshing the screen) causes the phishing page to be replaced by a blank screen. I imagine that makes investigation of phishing attacks much more difficult. The next step is probably for the phishing site to display content relevant to you (e.g. a specific logon screen for Citibank, because you are a customer there), rather than the current situation where they simply hope you are a customer of their target website.

Simon
http://www.AutoUpdatePlus.com

Anonymous June 11, 2007 4:11 AM

@GiggleStick, “Change your user-agent to the googlebot, and voilà, there’s the content.”

How do I do this?

Anonymous June 11, 2007 4:17 AM

@Simon,

Even better, on the second view the site could redirect you to a legitimate site.

Paul June 11, 2007 4:19 AM

If I understand this correctly, any network which uses a NAT gateway such as an ADSL modem, or which forces all its web browsing through a proxy server (i.e. nearly every home/SMB network on the planet), should only ever have one machine on the network affected by this. Of course, that machine could proceed to infect others it can contact on the LAN, but this still seems like a fairly serious limitation. I’d be willing to bet that the next step will be displaying different results based on a tracking cookie.

Some interesting proxy-server-based protections (and corresponding workarounds, I’m sure) might be possible here, e.g. the equivalent of reverse greylisting web sites. What I mean is this: the proxy server could request the page multiple times (with different user-agent strings?), or manage cookies on behalf of the user, and possibly provide protection for a whole site.
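
Something like this, very roughly (my own naming, and note the obvious caveat that legitimately dynamic pages also change between loads, so a real proxy would have to normalize what it compares):

```python
# Fetch the same URL twice and flag it if the two responses differ, which is
# exactly the behaviour the evasive kits described in the post exhibit.
import urllib.request

USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) Firefox/115.0",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15) AppleWebKit/605.1.15 Safari/605.1.15",
]

def fetch(url, user_agent):
    req = urllib.request.Request(url, headers={"User-Agent": user_agent})
    with urllib.request.urlopen(req, timeout=15) as resp:
        return resp.read()

def looks_evasive(url):
    # Caveat: ads, timestamps and session tokens make many legitimate pages
    # differ between loads; a real proxy would strip or hash the stable parts.
    first = fetch(url, USER_AGENTS[0])
    second = fetch(url, USER_AGENTS[1])
    return first != second
```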

Of course, SSL kills protections like this instantly. 🙁 When I worked at a school, I seriously considered limiting SSL access to whitelisted sites only. In a corporate network, I’m sure that wouldn’t wash – you’d have people setting up rogue ADSL connections all over the place just to get around the proxy server restrictions.

Colin June 11, 2007 9:39 AM

What makes this even worse is, with Firefox, often the “view source” button will fetch a NEW COPY of the page from the server for display. Yes, you heard me right. So you could potentially load up the mal-page, hit “view source”, and see something completely benign.

TNT June 11, 2007 8:56 PM

This technique has been around for a few months. Even worse, some malware authors are also using it to load different malware (or, for some countries, no malware at all) based on the country the IP resides in, all done server-side. The “Gromozon” group, for instance (if you don’t know what it is, look it up), has been doing this for several months now, combined with:

  • several header/referrer checks which make it almost impossible to crawl their pages with something like wget, even with “fake” user-agent headers (a rough sketch of this kind of check is at the end of this comment)
  • randomized location of the directory the exploits and the files reside in, changing every few minutes

The amount of work and organization that has come out of the malware groups in the last few months (particularly the ones from the former USSR) is stupefying.
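
Here is roughly what I mean by the header/referrer checks (not Gromozon’s actual code, just a sketch of the kind of server-side gate): requests that look like a real browser arriving via a link get the exploit page; wget, crawlers and analysts get a decoy.

```python
# Hypothetical server-side header/referrer gate, as described above.
def choose_response(headers):
    ua = headers.get("User-Agent", "")
    referer = headers.get("Referer", "")
    accept_language = headers.get("Accept-Language", "")

    browser_like = "Mozilla" in ua and accept_language != ""  # wget sends neither by default
    came_via_link = referer.startswith("http")                # direct fetches carry no referrer

    return "attack_page" if (browser_like and came_via_link) else "decoy_page"
```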

Ralph June 11, 2007 9:03 PM

Add to this the practice of delivering the code via the advertising content on third-party sites.

You’ve got malicious code popping up randomly throughout large sites, which is very hard to track.

Also: the Finjan report has some nice examples in it.

Fuic June 11, 2007 9:51 PM

@Colin
Firefox does NOT reload pages from the server when you view source. I have checked this by watching server logs.

I’m curious to know what you’re doing to make you think it does.

Paeniteo June 12, 2007 7:08 AM

@Fuic: “I’m curious to know what you’re doing to make you think it does.”

Sometimes I am under the same impression. It just takes too long to view source or to save an image to disk (there should not be any serious progress bar for saving a 300 KB JPG image from memory to disk). It really makes you think that the resource is being re-downloaded.

Eam June 14, 2007 12:31 PM

@Fuic:
It usually hits up the cache, but it definitely does make a request to the server from time to time (it can be a total pain when debugging HTML output from an ASP.NET app). You’ll notice the “View Source” window actually has a Reload option in the View menu.

jedermann September 20, 2008 10:20 AM

The kind of attack described in these posts seems to be (still?) happening at the NPR home page. On the first visit to the page, an antivirus scanner ad for something called “bestantivirus …” pops up, doesn’t seem to allow itself to be declined, appears to lock up the browser’s controls, and starts to “scan” your machine. The only escape is a forced kill of the browser. Subsequent visits don’t show this behavior. I’d warn NPR, but I don’t want to go to their page to get the contact email address.
