David Dittrich on Criminal Malware

Good essay: “Malware to crimeware: How far have they gone, and how do we catch up?”, ;login:, August 2009:

I have surveyed over a decade of advances in delivery of malware. Over this period, attackers have shifted to using complex, multi-phase attacks based on subtle social engineering tactics, advanced cryptographic techniques to defeat takeover and analysis, and highly targeted attacks that are intended to fly below the radar of current technical defenses. I will show how malicious technology combined with social manipulation is used against us and conclude that this understanding might even help us design our own combination of technical and social mechanisms to better protect us.

Posted on October 13, 2009 at 7:15 AM

Comments

Clive Robinson October 13, 2009 8:40 AM

Oh dear,

“and conclude that this understanding might even help us design our own combination of technical and social mechanisms to better protect us.”

First of all, people have to understand what it is they are doing…

Most “home computer” users and a lot of SOHOs are like teenage “raft builders”, in that they really do not appreciate or even think about the risks before they “jump on board”…

Clive Robinson October 13, 2009 9:13 AM

He is a little out of date in some respects.

For instance, the Honeynet Project, although a good example of early co-working, has of recent times become less relevant.

One reason for this is that it is possible to identify quite a few honeynets simply because they use virtual hosts. There are known methods by which multiple hosts on one hardware platform can be identified.

Those in the know will enumerate such sites and leave them alone.
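
To make the giveaway concrete: one widely documented family of checks, related to but simpler than the remote enumeration Clive describes, has the malware itself look for hypervisor artifacts before detonating. A minimal Python sketch, assuming a Linux guest; the MAC prefixes are the well-known OUIs of common hypervisors:

```python
# Toy illustration (editor's sketch): local checks a cautious piece of
# malware might use to decide it is inside a VM-based honeypot and bail.
# The OUI prefixes are the well-known ones for common hypervisors, and
# the paths assume a Linux guest. This is not Clive's remote method,
# which relies on giveaways like timing, load, and geolocation.
import glob

HYPERVISOR_OUIS = {
    "00:05:69", "00:0c:29", "00:50:56",  # VMware
    "08:00:27",                          # VirtualBox
    "00:16:3e",                          # Xen
    "00:15:5d",                          # Hyper-V
}

def looks_virtual() -> bool:
    # MAC vendor prefixes of the network interfaces are a cheap giveaway.
    for path in glob.glob("/sys/class/net/*/address"):
        try:
            mac = open(path).read().strip().lower()
        except OSError:
            continue
        if mac[:8] in HYPERVISOR_OUIS:
            return True
    # DMI product strings are another.
    try:
        product = open("/sys/class/dmi/id/product_name").read()
        if any(v in product for v in ("VMware", "VirtualBox", "KVM", "Virtual Machine")):
            return True
    except OSError:
        pass
    return False

if __name__ == "__main__":
    print("virtual environment suspected" if looks_virtual() else "no obvious VM artifacts")
```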

It appears that the crackers have started building their own “black lists”; it will be interesting to see how this develops over the next year or so.

Lazlo October 13, 2009 10:37 AM

@Clive: Simultaneously, many legitimate sites are running on virtual hosts. As this trend continues, malware that leaves those sites alone will begin to face a herd immunity and find itself starved out of existence, or at least that’s one possibility.

Kevin October 13, 2009 11:31 AM

There’s a big difference between “I have survived…” (in the blurb) and “I have surveyed…” (in the PDF).

Brandioch Conner October 13, 2009 12:43 PM

The last place I worked had all of the external-facing systems (including email) running on virtual machines.

And I’m in the process of the same conversion where I am working now.

Clive Robinson October 13, 2009 1:41 PM

@ Lazlo,

“many legitimate sites are running on virtual hosts. As this trend continues, malware that leaves those sites alone will begin to face a herd immunity”

It’s not just identifying “virtual hosts”; that’s only the first step. The next is to see what load they are under, whether their geo-location is where you would expect it to be, and a few other little giveaways.

It is surprising just how difficult it is to fake a working network to a careful observer. It’s a bit like a “human animatronic”: there are subtle giveaways that alert the cautious.

And let’s be honest, there are always going to be plenty of non-virtual hosts that a cautious attacker can go after.

And the joy of some of these enumerations is that they can be made to look like “script kiddy” type scans and not the more subtle scans they actually are, and thus may well be missed for what they actually are…

Kerry Thompson October 13, 2009 5:22 PM

@Clive,

I think Dittrich covered the point of Honeynet hosts, but didn’t really go into some potentially interesting detail: recently deployed malware won’t give the whole botnet game away if it is captured in a honeypot. The encryption, signing, and use of binary code-segment dropping demonstrate that even researchers with significant resources won’t be able to take control of the botnet if they capture a single infection.

Clive Robinson October 14, 2009 1:05 AM

@ Markus Jakobsson,

“An example of a likely future development is mobile malware”

It’s an area I’m acutely aware of and have been banging on about since the 1990s. I’ve a few suggestions of my own (start thinking thin/light-client type solutions).

“What is fascinating – and worrisome – about this likely future threat is that the current anti-virus paradigm is not well suited to address it.”

It is ill-suited not just to mobiles but to nearly all Internet-connected systems. And it is “so last century” (1980s) in the way it works, and fails every minute of the day.

The current system of dealing with malware is a “Red Queen’s race” and an expensive joke for most people.

And it only exists in its current form because of the near-zero cost of delivery.

First off, anti-virus is not “self protection”, it is protection by proxy, and like most “choke point” security it is not very effective.

In some respects it’s like renting “door minders”. You pay money for them, but they work for somebody else and do what their boss wants, with only a nod in the direction of what you want. They only mind the door, they often fail to stop your smarter ne’er-do-well getting in, and they are useless from that point onwards.

Oh and like many things in life you pay for what you get, but price is no indicator of quality.

Secondly, anti-virus is beyond a joke in terms of resource utilisation on anything other than custom platforms. At some point it would be interesting to see which occupies more Internet bandwidth, email or anti-virus updates. And most of us here know which is growing faster…

The simple fact is that the current anti-virus mode of operation is an outmoded and poor model of doing security, thought up before PCs were networked.

It is only viable as a business model because the companies don’t pay for transport of their product, and due to this there is currently no incentive for those companies to change the model.

And unfortunately it will carry on in this way until some significant external force makes the business model no longer viable and forces real innovation to happen.

One of the things that appears to have bypassed a lot of security thinkers is that the battle is no longer at the OS level.

Many applications (especially web browsers) do what the OS used to do with regard to multitasking, resource allocation/sharing, and inter-application communication, but importantly without any of the security.

And this is the real problem: application developers in general know little or nothing about security, and their employment model often does not encourage its use where they do have it.

The number of people who can think in the appropriate “security mindset” manner is so small that they are effectively an invisible resource as far as the current application development model is concerned, and will probably remain so for many years to come.

And this is the rub: it is also the “application development business model” that is broken as far as security is concerned, and this needs to change as well.

It will be interesting to see what happens with Google’s Chrome and Gears, and whether other people will follow in that direction.

However, until the software industry starts treating “security” in the same way the manufacturing industry had to come to terms with “quality”, as a fundamental part of the process, we will still be having this conversation in 20 years’ time (assuming I get that long in the tooth).

And as much as I hate to say it, the best way to get it started is to make Internet use “pay by originated volume”, so that distribution costs become a very, very real part of the business model.

Tony H. October 14, 2009 5:05 PM

“Simultaneously, many legitimate sites are running on virtual hosts. As this trend continues, malware that leaves those sites alone will begin to face a herd immunity and find itself starved out of existence, or at least that’s one possibility.”

To some extent, yes, but desktop systems are rarely running on virtualized hardware. That would seem to cut the problem down by quite a bit.

Rob Lewis October 15, 2009 6:35 AM

@Clive,

You are pretty well on the mark about AV. In 2008 I listened to a Symantec VP a few days before their threat report was officially due out (we were seeing an advance preview), and his comment was “Well, I don’t know the answer to this.” Things are far worse now, of course.

It appears that many things bypass security thinkers these days, but few are as sharp as you. However, a patched OS is not necessarily an inherently secure system, just a less vulnerable one.

Perhaps you would see value in a technology where nothing in user space (people, applications, processes, code using system interfaces or libraries, etc.) can bypass behaviour enforcement run from the OS kernel. We use a technique that augments the kernel’s security enforcement capability by injecting internal controls that raise security levels without impacting functionality or performance.

This technology can use any and every parameter within the operating system to specify a rule within its ruleset. It is not, however, limited to OS parameters: it can also query sub-applications and external values, which means there is virtually no limit to the kinds of rules that can be created and enforced, making it possible to create rules that map to business operations.

Thus, it is possible to secure systems that have unpatched applications, since malware that attempts to violate a behaviour stipulated as a business rule will be denied and will not be allowed to execute. This obviously has potential for dealing with mobile malware.
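
As a rough illustration of the default-deny, business-rule enforcement being described (all tool and rule names below are hypothetical, and a real product would enforce this from the kernel rather than in user-space Python):

```python
# Editor's toy sketch of default-deny behaviour rules of the kind Rob
# Lewis describes. Everything here is hypothetical illustration: a real
# system would enforce these checks from the kernel, not in Python.
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class Action:
    subject: str    # who is acting (a user or process name)
    operation: str  # e.g. "write", "exec", "connect"
    target: str     # e.g. a file path or network address

# A rule is any predicate over an Action; a real engine could also consult
# OS state, sub-applications, or external values (time of day, tickets...).
Rule = Callable[[Action], bool]

ALLOW_RULES: list[Rule] = [
    lambda a: (a.subject == "payroll_app"
               and a.operation == "write"
               and a.target.startswith("/var/payroll/")),
    lambda a: a.operation == "connect" and a.target.endswith(":443"),
]

def permitted(action: Action) -> bool:
    """Default deny: an action runs only if some business rule allows it."""
    return any(rule(action) for rule in ALLOW_RULES)

# Malware stepping outside the stipulated behaviour is simply denied,
# whether the application is patched or not:
print(permitted(Action("payroll_app", "write", "/var/payroll/run1")))  # True
print(permitted(Action("malware", "write", "/etc/passwd")))            # False
```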

Clive Robinson October 15, 2009 1:47 PM

@ Rob Lewis,

“However, a patched OS is not necessarily an inherently secure system, just a less vulnerable one.”

Absolutely, but unlike the majority of user applications, at least there are security mechanisms that are understood and available at the OS level (or there should be in a modern OS).

Although OSes are not as secure as they could be (even when tightened down by an expert), the malware battle has by and large moved on, because the OS is no longer the low-hanging fruit.

Many applications are multithreaded, have access to multiple resources, and can have multiple instances of themselves not just at the same privilege level but in the same unrestricted memory segment.

Thus they have little or none of the segregation that a sensible OS would normally enforce between multiple processes running under the same user.

And thus it is the application, such as the web browser, that is currently the low-hanging fruit, especially as it allows programs to be downloaded and run as part of normal user activity.

Even “sandboxed” scripts can communicate via access to shared resources, and often leverage themselves or have influence “outside of their box” in one way or another.

A year or so ago people were saying “OK, it can be done, but to what end?”; currently we are starting to see site- and user-specific code that hides transactions on electronic bank statements.

What is the next step for malware developers?

How long, say, before scripts from two different sites become aware of each other through a user’s browser?

Let us say that you are shopping on one site, and a script from that site becomes aware of another site you have open and opens a covert channel to pass info across. In essence, one site could influence the behaviour on another site.

I cannot immediately think of a really good example of why malware writers would find this advantageous, but I’m fairly sure that at some point one will…
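
For readers who want the mechanism rather than the motive, here is a toy Python illustration of the underlying idea: two parties that share nothing but one contended resource (and a clock) can still pass bits by timing. Browser cross-site channels, such as cache-timing tricks, exploit the same principle; this sketch is illustrative, not an exploit, and assumes generous time slots to tolerate scheduler jitter:

```python
# Editor's toy of a shared-resource covert channel: two threads that
# share only a lock (plus an agreed clock) leak bits to each other
# purely through how long the lock takes to acquire.
import threading
import time

shared = threading.Lock()
SLOT = 0.2                          # seconds per bit (generous, for jitter)
START = time.monotonic() + 0.2      # agreed slot boundaries

def wait_until(t: float) -> None:
    delay = t - time.monotonic()
    if delay > 0:
        time.sleep(delay)

def sender(bits: str) -> None:
    for i, b in enumerate(bits):
        wait_until(START + i * SLOT)
        if b == "1":                # "1" = keep the resource busy all slot
            with shared:
                wait_until(START + (i + 0.9) * SLOT)

def receiver(nbits: int, out: list) -> None:
    for i in range(nbits):
        wait_until(START + (i + 0.3) * SLOT)   # sample mid-slot
        t0 = time.monotonic()
        with shared:                # blocks iff the sender is holding it
            pass
        out.append("1" if time.monotonic() - t0 > SLOT * 0.2 else "0")

message, received = "1011", []
rx = threading.Thread(target=receiver, args=(len(message), received))
rx.start()
sender(message)
rx.join()
print("sent", message, "received", "".join(received))
```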

But the point is that we have yet to see what “fruit” the next malware will use. But whatever it is, the chances are it will only be “obvious with hindsight”.

And this is the hidden problem with mobile devices: with low CPU and memory resources and at best modest connectivity, patching applications on the mobile device is problematic.

Now if you think about it, what Google is trying to do is make a lite/thin client with the application running on a high-resource server.

If you patch the application on the server, then effectively all the mobiles that use it get the patched version immediately.

I would be the first to admit the concept is not new, and that lite/thin clients (and X-terms) never really had much market traction, due mainly to the lack of price differential between a thin client and a full-blown PC.

However with mobile devices the situation is very different.

Take the ubiquitous phone handset, for example: due to the limitations of batteries and physical size it is extremely resource-limited, and there is little that can be done to change that within the constraints of current cost-effective technology.

It therefore makes sense to devote those meagre resources to the user interface and to connectivity, which in essence is what a thin client is.

Some modest applications can be written as scripts to run within the “lite browser”; however, “heavy lift” applications would run on a server along with storage and other back-end productivity resources (the dreaded “groupware” etc.).

With regards to the server end of things, the trick is to have a framework where an application need have no security awareness; the framework deals with it.

Which is very much what you describe. However, the framework should provide more services than would normally be expected of an OS. In effect it should provide access to the usual business back ends.

Thus what the application really becomes is middleware, where the “application logic” effectively “scripts together” filters/tools in a similar way to the “Unix philosophy”.

The application developer concentrates on the application and error/exception handling. The framework provides the security and “heavy lift” such as DB searches etc.

Which leaves the user interface, which is perhaps best done as an abstracted “virtual display” anyway, as this allows various levels of hardware resource to be used transparently. Very light, low-bandwidth mobile devices can have most of the work done by a server that keeps a frame buffer and sends just “diffs” down to the mobile (VNC style), through to more capable devices using a higher-level protocol such as you would expect from a full-blown web browser.
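
A small sketch of the VNC-style “diffs” idea: the server keeps the previous frame, compares the new one tile by tile, and ships only the dirty tiles down the slow link. The frame representation and 16x16 tile size here are arbitrary choices for illustration:

```python
# Editor's sketch of VNC-style screen diffing: compare the new frame
# against the last one tile by tile and send only the tiles that changed.
# Frames are plain 2D lists of pixel values; 16x16 tiles are arbitrary.
TILE = 16

def dirty_tiles(old, new):
    """Yield (x, y, tile_pixels) for every tile that changed."""
    h, w = len(new), len(new[0])
    for ty in range(0, h, TILE):
        for tx in range(0, w, TILE):
            tile = [row[tx:tx + TILE] for row in new[ty:ty + TILE]]
            prev = [row[tx:tx + TILE] for row in old[ty:ty + TILE]]
            if tile != prev:
                yield tx, ty, tile

# A 64x64 "screen": the client already has frame0; only the small region
# that changed needs to cross the (slow, mobile) link for frame1.
frame0 = [[0] * 64 for _ in range(64)]
frame1 = [row[:] for row in frame0]
frame1[10][10] = frame1[10][11] = 255    # two pixels change

updates = list(dirty_tiles(frame0, frame1))
print(f"{len(updates)} of {(64 // TILE) ** 2} tiles sent")  # 1 of 16 tiles sent
```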

Surprisingly, all of this can be done with the current level of technology we have; however, we have to accept that there is a significant cost to “security” in terms of “efficiency”.

Security takes CPU cycles; however, proper segmentation usually means “one task per module”, where the module has its own kernel and memory, with access to all other resources through the framework. Simplicity dictates a “one size fits all” methodology for the modules, which is unlikely to lead to optimal usage.

However, is this lack of efficient resource utilisation really an issue?

In reality, only for the marketing department’s “bang for your buck” figures.

We are already happy to accept worse resource utilisation for “high availability” via fault-tolerant redundant hardware.

Clive Robinson October 16, 2009 1:36 AM

@ BF Skinner,

“You need a bigger audience.”

At the risk of seeming to be irreverent, for some reason your comment made me think of,

“First there was the word… …then God created Adam and Eve… …and they begat…”

Looks like I’ve some work to do, pass me the clay… 8)

Nick P October 17, 2009 1:50 PM

Clive, your last post was about 900 words covering 2 pages. I think you could successfully tackle many projects, but I’d love to see you write a textbook on concise writing. 😛

Your opening paragraphs cover the lack of isolation of application components. While separation kernels can solve this, we’d love to improve the security of existing OSes and apps, huh? I think it would help if we mentioned the real problem, which you described but didn’t specify: application developers are always reinventing the wheel for security and functionality. They roll all their own functionality instead of reusing OS- or library-level functionality. Take Apache, for example.

Like an OS, it has the concepts of users, subjects, objects, and permissions. It could feasibly be constructed to use the underlying UNIX/POSIX security layer to handle all (or almost all) of that. I wrote a partial design yesterday to confirm this. Instead, they wrote their own reference monitor, supporting code, etc., and integrated it with other bug-prone components, all in the same process space. And there were lots of bugs, and any bug in Apache bypassed the whole reference monitor. I think what needs to happen is that developers write code to use the existing functionality present in OSes and high-quality libraries, and then we can just focus on improving those. In the wheel analogy, everyone would be using the same wheel, which would get more reliable every year, and we’d never have a flat. 😉 One deviation: I prefer to have several “standard” approaches to use. I’m a strong proponent of what I call security through diversity. I don’t want a bug in one component taking out everyone’s apps, but we’d still be better off than now with just 2 or 3 high-assurance middleware layers or frameworks for the most common functionality.
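
A rough sketch of the delegation Nick is describing, not his actual design: the server authenticates a user, maps the name to a local account, and lets the kernel’s owner/group/other bits answer the permission question, rather than keeping permission tables of its own. Assumes a Unix host; supplementary groups and ACLs are ignored for brevity:

```python
# Editor's sketch (not Nick P's design): delegate authorization to the
# existing UNIX/POSIX security layer instead of writing an in-process
# reference monitor of your own.
import os
import pwd
import stat

def posix_may_read(username: str, path: str) -> bool:
    """Answer "may this user read this file?" from POSIX metadata alone.

    Simplified: ignores supplementary groups and ACLs for brevity.
    """
    user = pwd.getpwnam(username)   # uid/gid from the OS user database
    st = os.stat(path)
    if st.st_uid == user.pw_uid:
        return bool(st.st_mode & stat.S_IRUSR)
    if st.st_gid == user.pw_gid:
        return bool(st.st_mode & stat.S_IRGRP)
    return bool(st.st_mode & stat.S_IROTH)

# Usage: after authenticating "alice", the web server simply asks the OS
# rather than consulting an app-level ACL table of its own:
#   if posix_may_read("alice", "/srv/www/report.html"): serve(...)
```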

Your web browser covert channel was right on the money: it’s already being done. I can’t say any more about that, though. What I can say is that I used scripting in the browser to create a covert channel in a previous design for command and control of a botnet. The updates, goals, etc. would only be transferred when the user was browsing the web, via a benign-looking plugin. The traffic would look like HTTP, and the security warnings would appear to be caused by the user’s own browsing. The same trick was reused in a universal, covert comms client. I hate browsers and their bugs/holes, though. So I dropped this approach and turned to cross-platform C++ or Java. If the design and coding are done right, this is much safer than doing that stuff in a browser.

I have to disagree with your points about mobile targeting. Sure, they are resource-constrained and battery issues cause plenty of downtime. They are far from useless to attackers, though. I remember all the fun of using zombified PCs in the late ’90s, which had less hardware than today’s smartphones and only occasionally went online via 28Kbps modems. I see today’s expanding smartphone market as history repeating itself. And chip vendors are making it even more lucrative: ARM is about to release a 2GHz processor with lots of cache that uses only half a watt of power. The phones keep getting bulkier OSes, better hardware, and more application-level functionality. All of this makes them an easier and more attractive target. They certainly aren’t the lowest-hanging fruit, but they aren’t that high up the tree either. I expect to see more mobile attacks in the future, at least from the more sophisticated criminal groups. Most will still hit the vulnerable browsers and server apps, though.

Btw: thanks for the Dark Reading article. Most of my designs are geared for confidentiality and integrity, but not necessarily availability. I might include that tarpitting trick in the future.

Clive Robinson October 17, 2009 4:51 PM

@ Nick P.,

“but I’d love to see you write a textbook on concise writing.”

Hmm how about a small leaflet? 😉

With regards to,

“we’d love to improve the security of existing OSes and apps, huh?”

Yup, and the only real way I can see to do it is by minimal, proven software “components” running in a strictly controlled and monitored environment or “container”.

That is, you end up with the “Unix philosophy” of small, efficient tools “scripted together” to provide a larger application. The difference being that you effectively have two scripts: one to link the component tools together, the other to control what access the individual tools in their secure containers have to resources, and in what way, at each point in the application.

Yes, you are effectively writing the program twice, but one script is the program logic with no security, and the other is the resource access and thus the security.

For instance, apart from the waste of resources, do I really care if part of your application reads in the entire corporate DB, if it cannot pass the information on to anything else because it is isolated from any resource it could use to communicate that information?

This splitting of application and security is already partway there; I just think we should move further down that path.

Small tools that do specific, well-understood functions and have been properly tested becoming components available for larger scripted applications is a time-proven and fairly well-understood method of producing bespoke applications. All I’m really suggesting is that this runs in a “security framework” which has its own script saying what resources each tool can have access to from its security container at each point in the application.

This way the application developers do not have to worry about security; that is taken care of by the security script, which should be a mandated part of the functional specification.
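
A minimal sketch of the “two scripts” idea, with all tool and resource names hypothetical: one script wires the tools into a pipeline, a separate security script declares the only resources each tool may touch, and the framework mediates every access:

```python
# Editor's minimal sketch of the "two scripts" idea: the pipeline script
# wires tools together; the security script maps each tool to the only
# resources it may touch; the framework mediates every access. All tool
# and resource names are hypothetical.
SECURITY_SCRIPT = {
    "fetch_orders":  {"db:orders"},          # may read the orders DB...
    "format_report": set(),                  # pure transform: no resources
    "send_report":   {"net:mailhost:25"},    # ...and only this tool may email
}

class Framework:
    def __init__(self, policy):
        self.policy = policy

    def access(self, tool: str, resource: str):
        """Every resource request goes through this one choke point."""
        if resource not in self.policy.get(tool, set()):
            raise PermissionError(f"{tool} may not touch {resource}")
        return f"<handle {resource}>"  # stand-in for a real handle

# The "logic" script: plain tools that know nothing about security.
def fetch_orders(fw):
    return [fw.access("fetch_orders", "db:orders"), "order#1"]

def format_report(rows):
    return "\n".join(map(str, rows))

def send_report(fw, text):
    fw.access("send_report", "net:mailhost:25")
    return "sent"

fw = Framework(SECURITY_SCRIPT)
print(send_report(fw, format_report(fetch_orders(fw))))   # works: "sent"

# A compromised formatter trying to exfiltrate is simply refused:
try:
    fw.access("format_report", "net:mailhost:25")
except PermissionError as e:
    print(e)   # format_report may not touch net:mailhost:25
```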

Security by controlling resource access is not perfect, but it is a lot better than most current ways of doing things.

The downside, however, is that it is not a particularly efficient way to do things, not that I think that actually matters. And with a little thought it can actually bring in fault tolerance and thus high availability.

Oh, there is one other upside: the scripted-tools approach, whilst inefficient in non-human resources, does actually improve the utilisation of expensive human resources.

Effectively the script becomes a very-high-level language, and the old rule about “five good lines of code a day” still applies, but each line of script may easily be worth a couple of hundred lines of C/C++/Java or other low- or intermediate-level code.

With regards,

“I have to disagree with your points about mobile targeting. Sure, they are resource-constrained and battery issues cause plenty of downtime. They are far from useless to attackers, though.”

I think you have misunderstood what I was saying about mobiles.

I’m acutely aware that they are going to become the next big battleground, and that attackers are going to find them especially appealing.

The problem with them is the old “bang for your buck”, which will encourage features over security nine times out of ten.

My viewpoint is that their lack of resources will make them more vulnerable to attack, as application functionality will be given priority and will be “shoehorned in” at the expense of error and exception handling and data input validation.

So from a security aspect I would not go down the path of putting applications on them (i.e. treating them as mini desktops) but would use them as terminals connected to an application server (i.e. treating them as thin/lite clients).

In this way you can reduce the security surface on the mobile, whilst having rapid response to attacks on the server.

It is a far-from-perfect security solution, I admit, but if managed correctly it is far better than putting stripped-down, high-functionality applications on the mobiles themselves.

Speaking of reinventing the past: one security expert was saying that the iPhone was going to be more secure as it only runs one app at a time…

Oh dear, does he not remember DOS and TSRs of the 1980s, and the redirection of software interrupt vectors to add extra functionality…
