Benevolent Worms

Yet another story about benevolent worms and how they can secure our networks. This idea shows up every few years. (I wrote about it in 2000, and again in 2003.) This quote (emphasis mine) from the article shows what the problem is:

Simulations show that the larger the network grows, the more efficient this scheme should be. For example, if a network has 50,000 nodes (computers), and just 0.4% of those are honeypots, just 5% of the network will be infected before the immune system halts the virus, assuming the fix works properly. But, a 200-million-node network, with the same proportion of honeypots, should see just 0.001% of machines get infected.

This is from my 2003 essay:

A worm is not “bad” or “good” depending on its payload. Viral propagation mechanisms are inherently bad, and giving them beneficial payloads doesn’t make things better. A worm is no tool for any rational network administrator, regardless of intent.

A good software distribution mechanism has the following characteristics:

  1. People can choose the options they want.
  2. Installation is adapted to the host it’s running on.
  3. It’s easy to stop an installation in progress, or uninstall the software.
  4. It’s easy to know what has been installed where.

A successful worm, on the other hand, runs without the consent of the user. It has a small amount of code, and once it starts to spread, it is self-propagating, and will keep going automatically until it’s halted.

These characteristics are simply incompatible. Giving the user more choice, making installation flexible and universal, allowing for uninstallation—all of these make worms harder to propagate. Designing a better software distribution mechanism makes it a worse worm, and vice versa. On the other hand, making the worm quieter and less obvious to the user, making it smaller and easier to propagate, and making it impossible to contain, all make for bad software distribution.

All of this makes worms easy to get wrong and hard to recover from. Experimentation, most of it involuntary, proves that worms are very hard to debug successfully: in other words, once worms start spreading, it's hard to predict exactly what they will do. Some viruses were written to propagate harmlessly, but did damage—ranging from crashed machines to clogged networks—because of bugs in their code. Many worms were written to do damage and turned out to be harmless (which is even more revealing).

Intentional experimentation by well-meaning system administrators proves that in your average office environment, the code that successfully patches one machine won’t work on another. Indeed, sometimes the results are worse than any threat of external attack. Combining a tricky problem with a distribution mechanism that’s impossible to debug and difficult to control is fraught with danger. Every system administrator who’s ever distributed software automatically on his network has had the “I just automatically, with the press of a button, destroyed the software on hundreds of machines at once!” experience. And that’s with systems you can debug and control; self-propagating systems don’t even let you shut them down when you find the problem. Patching systems is fundamentally a human problem, and beneficial worms are a technical solution that doesn’t work.

Posted on December 5, 2005 at 2:50 PM

Comments

Nicholas Weaver December 5, 2005 3:40 PM

Actually (reading just the summary, not the Nature? article), this isn't a "white worm" approach; this is really just a "honeyfarm (use honeypots to analyze & create a filter) and distribution" paper.

But IMO, it doesn’t sound interesting, and it is an argument against Nature (and Science) publishing security papers: they don’t have a good basis for reviewing them.

Many researchers (myself included) have been proposing such things for years (I hyped the possibility 2+ years ago at a Usenix WIP) and many are currently implementing such things, including the research group I’m in (both here at ICSI and Stefan Savage’s group at UCSD), the Collapsar group at Purdue, many members of the Honeynet alliance, etc etc etc.

The real contribution of the Nature paper, it sounds like, is the simulations themselves, and those just aren't interesting. The "this will work if you solve the engineering problems" part is already blatantly obvious. We've known the sensitivity requirement (with K monitored addresses, not necessarily honeypots, and a random-scanning worm, you detect it when roughly 1/K of the hosts are infected) since my first WIP.
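A minimal Monte Carlo sketch of that sensitivity requirement (not taken from the paper or from any of the groups mentioned), assuming a uniformly random-scanning worm, a flat address space, and K monitored addresses; all parameters below are illustrative:

```python
import random

def infected_fraction_at_detection(address_space=1_000_000,
                                    vulnerable=10_000,
                                    monitors=100,
                                    trials=200):
    """Toy model: addresses [0, monitors) are monitored; the next `vulnerable`
    addresses are vulnerable hosts.  Each event is one worm scan aimed at a
    uniformly random address; we record the infected fraction the moment a
    monitored address first sees a scan."""
    vuln_lo, vuln_hi = monitors, monitors + vulnerable
    fractions = []
    for _ in range(trials):
        infected = {vuln_lo}                  # patient zero
        while True:
            target = random.randrange(address_space)
            if target < monitors:             # hit a monitor: worm detected
                fractions.append(len(infected) / vulnerable)
                break
            if vuln_lo <= target < vuln_hi:   # hit a vulnerable host: infect it
                infected.add(target)
    return sum(fractions) / len(fractions)

if __name__ == "__main__":
    avg = infected_fraction_at_detection()
    print(f"average infected fraction at first detection: {avg:.3%}")
    print(f"1/K prediction:                               {1/100:.3%}")
```

With the default of 100 monitors, the reported average should come out near 1%, i.e. roughly 1/K of the vulnerable hosts infected at the moment of first detection.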

Bruce Schneier December 5, 2005 4:40 PM

From the New Scientist article, it’s hard to figure out what they’re automatically sending out. It’s a “countermeasure.” Is it a filter to block infection, or a patch to prevent infection, or a piece of code to recover from infection? I don’t know.

You’re right that the basic ideas are all old, and that I read about a bunch of them first in your papers. But reading the article, it seemed like a white worm.

Nicholas Weaver December 5, 2005 5:21 PM

It seems more like consensual patch distribution, not a white worm, to me: otherwise they could never get the simulation to work. (A white worm has to be vastly better engineered than the malicious worm. To outrace a worm, your spread pretty much has to be through a consensual mechanism, as those are faster.)

I've sketched out something similar using Akamai as a proposed mechanism: my bet is I could get <10 sec signature distribution easily, <2 sec without much effort.

Koray Can December 5, 2005 5:35 PM

If this is not a form of playing catch-up, then I think this statement from the article alone needs a separate paper and news story:
“But the honeypots would attract a virus, analyse it automatically, and then distribute a countermeasure.”
Can Symantec et al. auto-analyze brand-new viruses at all? Has this ever been possible (or will it ever be, given the halting problem)? How are they going to guarantee that no attacker can attack the analysis to make it produce and mass-distribute another malicious worm?

Nicholas Weaver December 5, 2005 6:34 PM

Building auto-analysis facilities with a honeypot is an area of ongoing active research among several groups, but it seems to be fundamentally sound.

Symantec also has a lot of tools for auto-analyzing viruses and the like. So it's doable.

However, at Symantec, the biggest worry is false positives. Their business model is "NO FALSE POSITIVES"; as such, they are extremely reluctant to use a signature-push mechanism that would be fully automatic, at least if you ask people there publicly.

datarimlens December 5, 2005 6:37 PM

But …
Patching (as in patch forever with human support) is broken too. What is a better way to fix the broken patch system, Bruce?
Yes, auto-updates are dangerous, but only because the integration of the patch into the running system (choose options, adapt to the local host/validate, stop or uninstall the patch (retain old state), update a log somewhere) is completely insufficiently automated. Any ideas how to do better, at least for certain classes of systems?

Michael Graham December 5, 2005 8:12 PM

The first "good" worm was created at PARC many years ago to check out the network, around the time they were working on Ethernet. I forget the author, but one of the old IEEE pubs had something about this.

Nothing is new, just recycled. The Internet is just the 1890s world-wide telegraph system with a better user interface.

Nicholas Weaver December 5, 2005 8:27 PM

Shoch and Hupp, "Experiments with the Worm Program," I believe is the title.

VERY good paper, BTW. And it also concluded that "good worms don't work."

MOz December 5, 2005 8:50 PM

I assume there's some exciting work to be done on the automatic hack, I mean "patch," distribution system too? Otherwise I suspect it would make a much richer target than the usual ones. If it's just signature distribution, that's not quite as bad, but it would mean a common (open?) format for signatures.

Paul O December 6, 2005 12:46 AM

Let's compare the propagation of a "benevolent worm" to an ordered announcement to all subscribed machines that a new "benevolent" patch is available (RSS/push?), with a distribution hierarchy for that patch (in the spirit of the distribution hierarchy for DNS).

A machine can be set up to auto-update (a la Microsoft Update … UGH) or the user can be prompted to apply the update. Either way, the coverage ("inoculation") is explicit (rather than more random, as for a worm) and the entire process is managed.

Preferably, a network administrator will be able to confirm which machines under his/her control have had the patch applied, and take any appropriate action for those machines which have not.
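One way to make that concrete is a plain pull-and-verify client rather than anything self-propagating. A minimal sketch under assumed plumbing: the feed URL is hypothetical, and a pre-shared HMAC key stands in for whatever real signing infrastructure the distribution hierarchy would use; the point is only that application is explicit, verifiable, and logged:

```python
import hashlib
import hmac
import json
import logging
import urllib.request

FEED_URL = "https://updates.example.org/latest.json"   # hypothetical feed
SHARED_KEY = b"replace-with-a-real-provisioned-key"    # stand-in for real signing keys

logging.basicConfig(filename="applied_updates.log", level=logging.INFO)

def fetch_and_verify():
    """Pull the latest announcement, check its authenticator, and return the
    payload only if it verifies.  Assumed announcement format:
    {"payload": "<hex>", "mac": "<hex>"}."""
    with urllib.request.urlopen(FEED_URL) as resp:
        announcement = json.load(resp)
    payload = bytes.fromhex(announcement["payload"])
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, announcement["mac"]):
        raise ValueError("authenticator mismatch; refusing to apply update")
    return payload

def apply_update(payload: bytes):
    # Placeholder: a real client would hand the payload to the local patch or
    # signature-update machinery, and could be stopped or rolled back.
    logging.info("applied update sha256=%s (%d bytes)",
                 hashlib.sha256(payload).hexdigest(), len(payload))

if __name__ == "__main__":
    apply_update(fetch_and_verify())
```

An administrator can audit applied_updates.log, or simply not run the client, which is exactly the control a worm-based mechanism gives up.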

Fuzzy December 6, 2005 7:11 AM

@Paul O

Let's compare the propagation of a "benevolent worm" to an ordered announcement

A worm is essentially a peer-to-peer (P2P) distribution system.
It shares the P2P advantage of minimizing the bandwidth requirements for the original distributor.
It increases the problem of trust. When your update can come from any peer, the question of “do I trust this peer” becomes much more difficult.
As was pointed out in other comments, unless the peer list is known (calling tree) the distribution task becomes a race between the “bad worm” and the “good worm”.
With a known peer list, the list becomes a vulnerability. The “bad worm” can use it as a more efficient means of finding targets.

AB December 6, 2005 8:17 AM

datarimlens – I think you are missing the point. It isn't the patching, it is the distribution mechanism. If I find out my company's core business app (i.e., it breaks, we break) is trashed by this patch (something missed in testing), I have no way of stopping the white worm from trashing every PC in my company. With the current patching tools, that ability to halt a rollout is included.

aetius December 6, 2005 3:54 PM

We're all aware of the current problems. Say we create a system that is able to analyze a particular piece of malware and produce a signature in less than 10 seconds. Even if one had access to large numbers of backbone routers, and automatically disseminated that signature to a blocking mechanism … yeah, you can already see the problems with that. If the dissemination is not automatic, it can't be fast enough to block the spread of the malware — anything after that is irrelevant to preventing the spread of the malware, but may help with mitigating its total effect.

So far, the only solutions that present real mitigation capabilities are:

1) after-the-fact passive signature and blocking mechanisms.
2) better software.
3) defensive mechanisms to catch mistakes.

Solution 1 is eternal catchup, and only deals with existing problems – however, it mitigates those problems fairly well and is able to sometimes undo previous damage or reduce overall impact by “cleaning” systems and removing them from the pool of infected machines.

Solution 2 is actively being worked for and against by various forces – there is hope on that front, but it is years away. This really comes down to real standards, and I agree with Bruce that liability is the only way to force commercial software companies to take software quality seriously.

Solution 3 is a difficult engineering problem, but things are getting better on that front. Solution 3 is probably the thing that is going to make a real difference in the next couple of years (in my opinion). Trusted executables, minimum service rights, chroot jails, data execution prevention, all those technologies are a whiff of a start in the right direction. This is essentially improving the immune system of the members of the group, making it harder and harder for malware to worm its way in through the cracks.

The state that the system is in when it first encounters the malware is the critical problem — anything else is damage control and mop-up.

jmr December 6, 2005 8:01 PM

Koray –

Yes, it is possible. If you accept that any software installed on your computer without your explicit intent is "malicious", then detecting unknown software on a honeypot is trivial. Mark Russinovich has his RootkitRevealer, which works on a related idea.

Your honeypot could look for files installed on the local machine or changes to files on the local machine, analyze the change, and at a minimum send out detection code.

If your honeypot is running on a logged emulation, you can even back-trace to find what process on the emulation created or modified the file in question.

Given that any permanent worm must modify the filesystem, you could write a piece of software that automatically notices certain changes to a logged, emulated honeypot machine and figures out which process did it, then automatically generates a program that prevents that specific filesystem modification (and maybe kills the offending process?).

It’s an expensive proposition to undertake, as far as computing resources go, I’m sure, but it is possible, once you accept that arbitrary filesystem modifications are generally “malicious”.

Now, any attempt to patent this idea should fail, as I’ve demonstrated prior art, no?
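A rough sketch of just the filesystem-diffing piece of this idea (no process back-tracing or signature distribution), assuming a honeypot whose filesystem is visible at a placeholder path; it baselines the tree and reports anything that later appears or changes:

```python
import hashlib
import os
import time

WATCH_ROOT = "/honeypot/fs"      # placeholder: filesystem image being watched
POLL_SECONDS = 30                # placeholder polling interval

def snapshot(root):
    """Map every file path under `root` to a SHA-256 of its contents."""
    digests = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "rb") as fh:
                    digests[path] = hashlib.sha256(fh.read()).hexdigest()
            except OSError:
                continue          # file vanished or is unreadable; skip it
    return digests

def watch(root=WATCH_ROOT, interval=POLL_SECONDS):
    baseline = snapshot(root)
    while True:
        time.sleep(interval)
        current = snapshot(root)
        for path, digest in current.items():
            if path not in baseline:
                print(f"NEW FILE:      {path}  sha256={digest}")
            elif baseline[path] != digest:
                print(f"MODIFIED FILE: {path}  sha256={digest}")
        # On a honeypot, any hit above is a candidate for analysis; a fuller
        # system would also diff process lists and trace the writing process.
        baseline = current

if __name__ == "__main__":
    watch()
```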

Koray Can December 7, 2005 1:14 AM

@ jmr

That scenario would work, although I couldn’t stand it. Every computer has to mirror the honeypot, so nobody can run an application that the honeypot doesn’t. You can’t compile and run any new programs on this machine. I suppose that’s the idea. A machine on which every process and every port is known.

anti December 7, 2005 11:46 AM

@jmr

If a malefactor knew about the honeypots, they could use that behavior for malicious purposes.

For example, write a worm that modifies a file that normally and frequently IS modified, then attack the honeypots, have it generate an automatic countermeasure, and watch the fun.

This is the way some bio-viruses work: attack the immune system itself, turn it to its own purposes, and kaboom.

Eran Shir December 8, 2005 1:41 AM

Hi, a friend just pointed me to this discussion, and I had a couple of comments.
First, though it was perceived this way by the public, our paper is not a security paper. Its main contribution is theoretically analyzing the construct of multiple correlated, overlapping networks, showing that the dynamics on such a construct are very different from those on a vanilla network.
Second, while we didn't invent the idea of distributive immunization (in the paper we cite work on the subject going back to '97), it was (and, from most of the comments above, I presume still is) considered impractical. On this point I disagree. There are many good objections that can be raised, like the auto-immune effects issue and infrastructure hijacking, but these are solvable issues that should be handled during design.
Finally, we do not propose distributing either a worm or a patch. We are talking about distributing a signature which represents the Kolmogorov complexity of the virus, a low-payload data file. In that regard it is much less problematic than the current updates deployed by antivirus software.

While I’m not a security guy, I find your discussion quite interesting, so I’m happy I got to know about this site.
Best,
Eran
