Benevolent Worms

This is a stupid idea:

Milan Vojnovic and colleagues from Microsoft Research in Cambridge, UK, want to make useful pieces of information such as software updates behave more like computer worms: spreading between computers instead of being downloaded from central servers.

The research may also help defend against malicious types of worm, the researchers say.

Software worms spread by self-replicating. After infecting one computer they probe others to find new hosts. Most existing worms randomly probe computers when looking for new hosts to infect, but that is inefficient, says Vojnovic, because they waste time exploring groups or “subnets” of computers that contain few uninfected hosts.
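The inefficiency is easy to see in a toy simulation. The sketch below is an invented model, not anything from the paper: it compares uniform random probing against a scanner that spends half its probes on subnets that have already yielded hits, with the vulnerable hosts clustered in a few subnets (as they often are in practice).

```python
import random

# Toy model: 256 subnets of 256 hosts; vulnerable hosts cluster in 16 subnets.
random.seed(1)
SUBNETS, HOSTS = 256, 256
vulnerable = {
    (s, h)
    for s in random.sample(range(SUBNETS), 16)  # 16 "soft" subnets
    for h in random.sample(range(HOSTS), 64)    # 64 vulnerable hosts in each
}

def random_probing(budget):
    """Probe uniformly random (subnet, host) pairs."""
    hits = set()
    for _ in range(budget):
        probe = (random.randrange(SUBNETS), random.randrange(HOSTS))
        if probe in vulnerable:
            hits.add(probe)
    return len(hits)

def subnet_aware_probing(budget):
    """Spend half the probes on subnets that have already yielded hits."""
    hits, hot_subnets = set(), []
    for _ in range(budget):
        if hot_subnets and random.random() < 0.5:
            s = random.choice(hot_subnets)   # exploit a known-soft subnet
        else:
            s = random.randrange(SUBNETS)    # keep exploring
        probe = (s, random.randrange(HOSTS))
        if probe in vulnerable:
            hits.add(probe)
            hot_subnets.append(s)
    return len(hits)

budget = 20000
print("random probing: ", random_probing(budget))
print("subnet-aware:   ", subnet_aware_probing(budget))
```

With the same probing budget, the subnet-aware scanner finds several times as many vulnerable hosts, which is why topology matters to a worm author and to a content-distribution designer alike.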

This idea pops up every few years. This is what I wrote back in 2003, updating something I wrote in 2000:

This is tempting for several reasons. One, it’s poetic: turning a weapon against itself. Two, it lets ethical programmers share in the fun of designing worms. And three, it sounds like a promising technique to solve one of the nastiest online security problems: patching or repairing computers’ vulnerabilities.

Everyone knows that patching is in shambles. Users, especially home users, don’t do it. The best patching techniques involve a lot of negotiation, pleading, and manual labor…things that nobody enjoys very much. Beneficial worms look like a happy solution. You turn a Byzantine social problem into a fun technical problem. You don’t have to convince people to install patches and system updates; you use technology to force them to do what you want.

And that’s exactly why it’s a terrible idea. Patching other people’s machines without annoying them is good; patching other people’s machines without their consent is not. A worm is not “bad” or “good” depending on its payload. Viral propagation mechanisms are inherently bad, and giving them beneficial payloads doesn’t make things better. A worm is no tool for any rational network administrator, regardless of intent.

A good software distribution mechanism has the following characteristics:

  1. People can choose the options they want.
  2. Installation is adapted to the host it’s running on.
  3. It’s easy to stop an installation in progress, or uninstall the software.
  4. It’s easy to know what has been installed where.

A successful worm, on the other hand, runs without the consent of the user. It has a small amount of code, and once it starts to spread, it is self-propagating, and will keep going automatically until it’s halted.

These characteristics are simply incompatible. Giving the user more choice, making installation flexible and universal, allowing for uninstallation—all of these make worms harder to propagate. Designing a better software distribution mechanism makes it a worse worm, and vice versa. On the other hand, making the worm quieter and less obvious to the user, making it smaller and easier to propagate, and making it impossible to contain, all make for bad software distribution.

EDITED TO ADD (2/19): This is worth reading on the topic.

EDITED TO ADD (2/19): Microsoft is trying to dispel the rumor that it is working on this technology.

EDITED TO ADD (2/21): Using benevolent worms to test Internet censorship.

EDITED TO ADD (3/13): The benevolent W32.Welchia.Worm, intended to fix Blaster-infected systems, just created havoc.

Posted on February 19, 2008 at 6:57 AM • 48 Comments

Comments

Clive Robinson February 19, 2008 7:27 AM

Like a lot of “stupid ideas” it is actually a “sounds sensible” one that has uncontrollable side effects, which is of course the real danger (eternal vigilance being the only defence).

I think it was PARC that did the original work on this back in the very early days of networked computers (1980ish).

I suspect that MS will come up with the same thing again under a new “Trusted Platform Initiative” or whatever it decides to call its next attempt to own your hardware and work.

Ronald van den Heetkamp February 19, 2008 7:59 AM

Sounds like a bad idea to me.

I agree Bruce, I can see different ways of exploiting this. One is to clone its behavior and –probably– whitelisted signature to evade IPS blocking or IDS detection. Secondly, it requires privileged access to the machine in order to patch it. Thirdly, there is the possibility of flaws in the worm itself that can make it hostile or susceptible to modification.

clvrmnky February 19, 2008 8:12 AM

Not new, of course. Cohen’s seminal work on “live programs” all those years ago specifically suggested the technique might be used for distributed software and patch delivery.

Unix Ronin February 19, 2008 8:15 AM

Brilliant! After all, you’ll always be able to tell a legitimate, clean, safe update from a trojanned fake because … um … because, uh …

(wait. How does that part work again…?)

Lazy Lemur February 19, 2008 8:18 AM

I don’t necessarily agree – Though it is a difficult moral dilemma. What I see with benevolent worms is the lesser of two evils. Sometimes, there are only two options.

n0_j0 February 19, 2008 8:49 AM

Lazy Lemur:

What happens when a “benevolent” worm patches your production systems, but causes them to break? Is that really less evil than any other worm that changes your production systems and causes them to break? I fail to see why that’s the lesser of the 2 evils. If they were to go ahead and do this, they would just increase the number of worms out there and increase the problems caused by them.

Dewey February 19, 2008 9:02 AM

I, too, am not so sure this is a completely bad idea.

Unfortunately, I don’t see a good implementation and I do see a slippery slope, but anyway…

Here’s a solution that appeals to technocrats…

Rather than MS creating virus patches, MS creates normal patches (patch Tuesday is bad enough, thank you). Simultaneously, someone (it doesn’t have to be MS) creates a worm that exploits a vulnerability and that when it succeeds, it installs the patch for that vulnerability.

Thus, those who can defend themselves against worms can also defend themselves against involuntary patches. Those who can’t have at least a chance of being patched involuntarily. Of course, it’s now a race between “benign” and “malevolent” patches.

The slippery slope is that it’s not a stable situation — if we imagine that it were a fully established and mature situation, then I can certainly imagine RvdH’s situation above where AV makers start “whitelisting” signatures, which directly undermines my Utopia of self-defense.

Seth February 19, 2008 9:08 AM

There are already evil worms that patch all the holes they know about, and remove other evil worms: they don’t want competition.

Rohan Verghese February 19, 2008 10:22 AM

It’s only a bad idea if you apply it to computers that you do not own/control.

It might be an interesting idea to propagate patches across a large server farm like Google’s. It would have the advantage that it wouldn’t take down the entire server farm at one time, and patching would be semi-automatic. You wouldn’t have to explicitly target individual machines for patches.

Kevin Schofield February 19, 2008 10:43 AM

Go read the actual research paper. The paper is about measuring network performance and efficiency of content distribution mechanisms, which you could use for dozens of different things.

http://research.microsoft.com/~milanv/MSR-TR-2007-82.pdf

There is a brief mention of worms (benevolent or otherwise) as one of many potential applications for non-hierarchical content distribution, but the paper is not in any sense about developing benevolent worms.

Anonymous February 19, 2008 10:43 AM

@Nicholas Weaver

Interesting paper, thanks, was worth reading.

@Rohan Verghese

Why would you let a worm loose inside a network if you can apply patches directly anyway? It’s just a matter of mapping the network and applying patches, and it’s a fair guess that’s automated already. It’s a whole different ballgame to create worms that replicate themselves. Basically it’s a ton of overhead, as many cited above; stability issues can arise, because the self-replicating worm must be stored somehow, usually in memory or via a vulnerability in the host, and propagating the worm to other hosts requires a reset or another worm that dumps the stack, and on and on until your only rescue is a distributed reboot.

Far too dangerous IMHO.

Ryan February 19, 2008 10:51 AM

A bit of devil’s advocacy..

I don’t see how benevolent worms would be any worse than malevolent worms. Considering that both would be relying on the same security holes, the benevolent worms will only “infect” the same systems that would have otherwise been infected by the malevolent worms.

Everything I’ve read seems to assume that these “white worms” would be released as a primary defense, which seems odd to me. Of all the vulnerable computers, a subset will be manually patched before exposure. Of the remaining, that are usually destined to be infected by the bad worm, what’s wrong with them being infected by a benevolent worm instead? At worst, there is no difference in outcome (either through being written poorly, or simply not spreading fast enough to have an impact). At best, you’re looking at a measurable number of systems that are now invulnerable to the initial attack and have an extended grace period in which they can be patched properly. Security is best done in layers, after all. If used, a white worm should be for broadening the reach of security measures like patching, not replacing them.

..and with that bit of devil’s advocacy aside.

There are significant legal consequences to benevolent worm deployment. The only types of groups I can think of that would not be deterred by the law are the kind that think they’re above the law (vigilante justice groups) and the kind that aren’t held accountable for breaking the law (apparently any gov’t organization involved in the war on terror), and I can’t endorse either.

Rohan Verghese February 19, 2008 11:41 AM

@anonymous, you don’t necessarily have to use a vulnerability. It’s just a self-propagating patch. Each node would have software that “accepted” the worm.

As for why, if your network is sufficiently large, it might be hard to map the network. Indeed, by the time you finish mapping the network, new nodes may already have been added or removed.

A self-propagating worm would allow you to patch a network in flux with relatively little effort. You could think of it as “distributed patching”, if the word “worm” has too many negative connotations.
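
A minimal sketch of that kind of consensual distributed patching, as a gossip loop in which every node runs an agent that verifies, installs, and relays updates; the topology, fanout, and round structure here are invented for illustration:

```python
import random

# Gossip-style "distributed patching": no vulnerability is exploited; each
# node's agent accepts an update and forwards it to a few random peers.
random.seed(2)
NODES, FANOUT, ROUNDS = 10_000, 3, 12

patched = {0}                      # node 0 gets the patch from the vendor
frontier = [0]
for rnd in range(1, ROUNDS + 1):
    next_frontier = []
    for node in frontier:
        for peer in random.sample(range(NODES), FANOUT):
            if peer not in patched:
                patched.add(peer)  # peer verifies, installs, then forwards
                next_frontier.append(peer)
    frontier = next_frontier
    print(f"round {rnd:2d}: {len(patched)} / {NODES} nodes patched")
```

Because every node opts in by running the agent, this is really epidemic content distribution rather than a worm in the exploit sense.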

Bob February 19, 2008 11:44 AM

Any form of spontaneous patch application would be a nightmare for anything that requires a computer’s configuration to be managed.

An example is software development and testing. I’m working on a project to replicate the functionality of some old software, so I’ve got an old iMac sitting here running Mac OS 10.3.0 that runs the original software. If this were to be spontaneously updated to a newer OS version, it would be a big pain in the neck.

Then there’s the question of buggy patches. My mom once called for advice because she received the “Windows Genuine Advantage” patch and was among the 20% of folks who were wrongly accused of being a pirate.

So my Windows and Mac and Ubuntu boxen all have automatic patch application disabled. I tend to wait a week or more before applying Windows or Mac OS fixes because they’re frequently flaky. Again, if some spontaneously-applied patch brought down one of my development machines, it would be a big pain in the neck.

SteveJ February 19, 2008 11:45 AM

If it’s reasonable to publish a buffer overflow vulnerability in a piece of software, then surely it’s also reasonable to test whether, once a patch is released, users of the software are actually installing it. “90% of users are vulnerable” is almost as bad as “100% of users are vulnerable”. A worm could certainly tell you that.

But none of this requires a worm. If some largish organisation wants to test patch uptake or force patches on people, they could do so at least as efficiently via a hierarchical network as via replicating software. And if some individual wants to do it, without the resources to run enough nodes to cover the user base of particular software, then they probably shouldn’t be allowed…

Dan Philpott February 19, 2008 11:49 AM

I think this is one of those rare occasions where Schneier jumped the gun. This isn’t the good worm idea he has written about in the past, where ‘good’ hackers write ‘good’ worms using ‘bad’ hacker techniques to introduce patches.

The paper is clearly about the efficacy of an epidemiological model of software distribution. I’d characterize it as a consideration of network theory overlaid on computer networks of a given configuration. It has application in worm containment, of course, but the paper is agnostic as to application. This could be used as a model for a new mechanism of software distribution quite unlike the more structured client-server or p2p styles of distribution.

The paper unfortunately suffers from a popular association with malware worms and the associated kneejerk responses that brings.

Larry D'Anna February 19, 2008 12:04 PM

A related, but far less idiotic idea would be to use some sort of peer-to-peer system to distribute patches. They should be signed, and only installed with user permission of course.
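
A sketch of that signed, consent-required flow, using the third-party cryptography package (pip install cryptography) for Ed25519 signatures; the key handling and the install step are stand-ins:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

vendor_key = Ed25519PrivateKey.generate()   # held by the vendor only
vendor_pub = vendor_key.public_key()        # shipped with the OS

patch = b"...patch bytes fetched from any untrusted peer..."
signature = vendor_key.sign(patch)          # done once, by the vendor

def maybe_install(patch: bytes, signature: bytes) -> None:
    try:
        vendor_pub.verify(signature, patch)  # peers cannot forge this
    except InvalidSignature:
        print("rejected: bad signature")
        return
    if input("Install vendor-signed patch? [y/N] ").lower() == "y":
        print("installing...")               # real install goes here

maybe_install(patch, signature)
```

The peers only move bytes around; trust comes from the vendor’s signature and the user’s explicit yes.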

ZaD MoFo February 19, 2008 12:16 PM

I would like to know why there are so many different “.com .org .net” names, but no .upd (update) top-level name: a channel that would serve only for system or software updates.

I imagine my computers establishing a communication channel with such an “anysoftwarecompany.upd” site, where each company (whose maintenance portal is already configured in my computers through the software install) is invited, when visited, to offer the most recent update.

The idea of a software update vectored as a worm, able to browse my home (yes, my home), makes me very uncomfortable.

Jeremy February 19, 2008 1:41 PM

@ZaD MoFo:

It’s a wonderful idea in theory, but the problem is that by investing trust in a domain extension, you invite it to be abused. If I buy a domain in that extension, I could easily use it to distribute malware disguised as a legitimate update, and a large number of non-technical users are going to bite on it. It could actually make it easier by listing a specific flaw that my malware supposedly “patches”, and the people most likely to download the “patch” are those most susceptible to my malware.

Leo February 19, 2008 2:08 PM

I wonder what would happen if all this energy was put into getting the software right in the first place?

Aredhel February 19, 2008 2:34 PM

This has nothing to do with installing patches without the consent of the owners of the computers; it is about getting the patch to the computers that want it as quickly as possible.

Right now each computer asks “Are there any new patches yet?” every so often. The problem with that is that it takes up too much bandwidth at the central server.

A push model would work much better. Bandwidth is only used when there is a new patch. For past push models you need to a) know the shape of the network ahead of time, and b) trust the intermediate nodes.

The paper says that if you send out the patch using the same model as viruses, then it spreads almost as fast as the normal push model, but without having to trust all the nodes in the middle.

The primary messages would consist of a patch number, a time stamp, and a digital signature. After 5 minutes of spreading you now have a tree of 90% of the computers that are online and want the patch, and it can be sent out in a single push cycle. The remaining 10% can get the patch by polling.

The whole thing can be sped up by having each computer remember which computers it told last time, and try to tell them first.
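
For concreteness, a toy version of the small signed announcement described above; the field names and the stubbed-out signature check are invented, not taken from the paper:

```python
import time
from dataclasses import dataclass

@dataclass(frozen=True)
class PatchAnnouncement:
    patch_number: int
    timestamp: float
    signature: bytes   # vendor's signature over (patch_number, timestamp)

seen: set[int] = set()

def verify_vendor_signature(ann: PatchAnnouncement) -> bool:
    return ann.signature == b"stub"           # stand-in for a real check

def send(peer: str, ann: PatchAnnouncement) -> None:
    print(f"relaying announcement #{ann.patch_number} to {peer}")

def on_announcement(ann: PatchAnnouncement, peers: list[str]) -> None:
    if ann.patch_number in seen:              # already relayed; stop here
        return
    if not verify_vendor_signature(ann):      # forged announcements die out
        return
    seen.add(ann.patch_number)
    for peer in peers:                        # relay first so the tree grows,
        send(peer, ann)
    print(f"queueing download of patch #{ann.patch_number}")  # then pull it

on_announcement(PatchAnnouncement(42, time.time(), b"stub"),
                peers=["hostA", "hostB"])
```

The announcement is tiny and signed, so untrusted intermediaries can relay it without being able to tamper with it; the patch body itself is pulled separately.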

lattera February 19, 2008 3:35 PM

@ZaD MoFo,

What if DNS is compromised? You’ll be downloading from a malicious “update” server. Best to use the IP directly…

Dave C February 19, 2008 3:43 PM

What’s so magical about consent? Anyone who lives in a society with others has their individual rights tempered by the rights of the group. I have no more right to run a vulnerable (or infected) server connected to a public network than I do to set a fire and prevent the fire department from putting it out. Yes, this is “my fire” and anyone who puts it out violates my “consent.” I fail to see how patching security problems is any different from eliminating other public nuisances.

A reasonable fallback position from consent is due process. Maybe running an unpatched server isn’t an emergency and is more like having a pile of old tires in the front yard. The government can’t take away your old tires without citing you multiple times and eventually filing a lawsuit to have your yard declared a nuisance. Only after this due process can the government take away your garbage. However, the patching situation fails the choice of errors analysis because it is too much process for a relatively minor problem. Filing a lawsuit is reasonable for old tires only because 99.99% of the people don’t have old tires in their yard. If there are such a large number of users with unpatched servers, should every state hire a thousand civil service employees just to handle this? Probably not. Only a low-effort solution is reasonable given the severity of the problem individually. A solution is warranted because the harm in aggregate is serious. Maybe turning Internet connections off at the pipe? This fails the individual rights calculus because it seriously harms the (clueless) user for something that really isn’t that bad.

Involuntary patching seems like a reasonable balance of individual rights and the right of the group.

Cairnarvon February 19, 2008 4:57 PM

Ronald: “One is to clone its behavior and –probably– whitelisted signature”
Ronin: “Brilliant! After all, you’ll always be able to tell a legitimate, clean, safe update from a trojanned fake because … um … because, uh …”

It wouldn’t be whitelisted, and you wouldn’t need to be able to tell it from a fake. The idea is to patch users without any protection of their own, not users that already use virus protection.

It’s still a stupid idea, of course, but not for those reasons.

Paul February 19, 2008 6:16 PM

Dave C.

I once voluntarily let MS apply a patch to my computer. It took me two days to rebuild the system I had just finished putting together but had not yet backed up because I foolishly wanted all patches in place first. I make enough foolish mistakes of my own, I cannot afford to accommodate MS’s, or anyone else’s as well.

dave tweed February 19, 2008 7:44 PM

I kinda take the point that people correctly say the paper is about how to quickly distribute patches amongst machines and not about applying patches without consent, except… what’s the point in distributing the patch much more quickly if the time to actual application is still controlled by the user? If my machine is vulnerable, the fact I may happen to have an unapplied patch file lying on my hard disc at the time is neither here nor there.

I completely agree that the user should be able to postpone potentially risky patches at times where they can’t risk downtime. My only resolution to this is that the connection to the internet should be through very, very, very simple programmable devices (modem, router, etc) implementing simple filtering which can be updated by this strategy without any risk of a crash and thus limit DOS type attacks, etc. Any worm that just wants to infect and mess with your computer gets ignored by this level and handled by the regular AV updates, whenever the user decides to install them.

Of course, you can’t get there from where we are now 🙂 .

Brad Templeton February 19, 2008 11:57 PM

Well, what about a middle ground. Red Flag worms that simply discover vulnerable systems and trigger a warning to the user, possibly through an official OS API for triggering such warnings. “You’re compromised, go to your OS vendor and look for this patch.” So the user decides when and if to patch, they just get more urgency. And the red flag worm tells them that they are lucky it wasn’t a program that tried to do worse.

Now I imagine one could Phish with this message, but frankly, if you can infect a machine with running code, what need do you have to phish? The OS standard API would only direct people to official patches from the OS vendor, it would not give them random URLs. They could fake the dialog, but if they can do that, they can just download nasty code themselves, no?

So while a red flag worm still has problems in principle, it does counter most of the pragmatic objections rather than the principled ones.

Thomas February 20, 2008 1:08 AM

@Brad Templeton

If it doesn’t ‘infect’ the target computer and use it to propagate then it’s not a worm.

averros February 20, 2008 4:58 AM

@Dave C.

Involuntary patching seems like a reasonable balance of individual rights and the right of the group.

This, of course, assumes that a group is a being in its own right, and can have rights.

This belief is called “collectivism” and is the root of most evil in modern history (particularly the history of the 20th century).

Get rid of this belief. It is both irrational (because it is a trivial category error to think that a collection somehow inherits the properties of its members) and hugely dangerous when applied in political discourse.

Hint: communism and fascism both have this belief as the centerpiece.

In practice anyone talking about rights of a group, benefits to society, etc, is either confused, or simply means his own interests.

Penguinista February 20, 2008 9:24 AM

I can just imagine a class action suit against MS on behalf of companies that are required for regulatory reasons to control the software running on certain servers. Then there will be the suits over DDOS attacks. This could keep some lawyers busy for quite a while.

Ian Griffiths February 20, 2008 10:00 AM

It’s not hugely surprising that almost nobody who posted a comment apparently bothered to read even the abstract of the article in question.

It’s considerably more disappointing that Bruce Schneier himself didn’t. Heck, even from the part of the New Scientist article that he quoted, I thought he had got the wrong end of the stick. Reading the research in question confirms it. This is not about ‘benevolent worms’. It’s about dissemination of information, and it looks at a couple of different applicable scenarios: one is how worms propagate, and another is how software may be distributed.

If the example scenario had instead been peer to peer file transfer (which is just as applicable), would you have leapt to the conclusion that Microsoft researchers are aiming to boost piracy?

If you’re going to bother to post that you think something is a bad idea, you should at least take the time to verify that it’s the idea you think it is. It was the work of less than a minute in this case to discover that this work was not proposing the idea Bruce (rightly) rubbishes.

Lazy.

Anonymous February 20, 2008 11:47 AM

  1. Whether most people consider these worms “good” or “bad”, sooner or later they’ll appear (if they haven’t already). If there’s going to be an ecology of worms anyway, I don’t see that we’d be better off if these worms weren’t part of it.

  2. One possibility I’ve discussed with friends is the caged tiger approach. I set up some code on my machine that will, of itself, do nothing. But it’s wired to respond to an attack against a specific vulnerability (or maybe a specific known worm). If your machine attacks mine (i.e. if a worm tries to jump from your machine to mine) then the code executes, invades your machine and kills the worm. In some variations it might a) patch the hole in your machine, b) leave a message on your screen, c) set itself up on your machine to await the next attack. I don’t know whether this is practical, but it has the advantage that it does nothing to machines that haven’t already been infected (and become attackers).

Ronald van den Heetkamp February 20, 2008 7:39 PM

Let’s use history as a lesson:

quote:

Attempts at creating ‘good’ worms have failed, many times because the writers did not adopt the safeguards outlined in the Bontchev paper. In 1982, prior to Bontchev’s work, two Xerox Palo Alto Research Center (PARC) researchers John Shoch and Jon Hupp coined the term ‘worm’ for a program that spread around their 100-computer network updating drivers. A flipped bit in the program caused the resulting worm to spread uncontrollably and clog the network.

/unquote

Kanly February 22, 2008 1:40 AM

“I once voluntarily let MS apply a patch to my computer.”

Amen! Ideally we should be able to trust the vendor, but Microsoft has broken that trust time and time again. There’s so much corporate spyware, and everything from Apple (Bonjour) to Adobe is constantly phoning home and downloading God knows what. Security updates are one thing, but pushing new versions down your pipe is too much. If the software on my machine works, damn well leave it alone, Microsoft!

We now keep our development machines completely separate. Behind a firewall router you’re reasonably safe so disabling updates is an option.

Nick FitzGerald February 23, 2008 6:03 PM

@Brad

“Now I imagine one could Phish with this message, but frankly, if you can infect a machine with running code, what need do you have to phish?”

Ummmm — there would be no need for a phisher to run code to display the message. A suitable facsimile of the standard API’s on-screen message display, perhaps in a phishing scam Email, or in a popup on a web page, would be more than enough to catch the low-hanging fruit…

“The OS standard API would only direct people to official patches from the OS vendor, it would not give them random URLs. They could fake the dialog, but if they can do that, they can just download nasty code themselves, no?”

Yes, BUT they don’t have to fake the dialog directly in the client OS. Far too many users are far too computer illiterate to differentiate between a speech bubble rising from the Windows system tray and a browser popup (or even a web page), or between a “security padlock” in their browser status bar and randomly on the web page they’re looking at (or even as the favicon!).

Security “improvements” that almost cannot help those most in need of them, and that mainly only help those who wouldn’t have fallen for the spoof anyway, aren’t really much of an improvement…

Pieter March 15, 2008 4:32 AM

Maybe a peer-to-peer system that makes already-installed updates available to any nearby computers is more viable.

With some digital signatures, it could speed up patch delivery and cut bandwidth costs.

A PC would just ask its neighbors whether they have blob xyz with signature abc available.
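
A toy version of that neighbor lookup, using a content hash as a stand-in for the digital signatures mentioned above; the neighbor names and the in-memory “network” are invented:

```python
import hashlib

# Updates are addressed by content hash, so a blob fetched from any
# neighbor can be verified before use.
neighbor_caches = {
    "neighbor-1": {},
    "neighbor-2": {hashlib.sha256(b"patch-bytes").hexdigest(): b"patch-bytes"},
}

def fetch_update(expected_hash: str) -> bytes | None:
    for name, cache in neighbor_caches.items():   # try nearby PCs first
        blob = cache.get(expected_hash)
        if blob and hashlib.sha256(blob).hexdigest() == expected_hash:
            print(f"got verified blob from {name}")
            return blob
    print("no neighbor has it; falling back to the vendor server")
    return None

fetch_update(hashlib.sha256(b"patch-bytes").hexdigest())
```

Nothing self-propagates here: a machine only hands out updates it has already installed, to neighbors that asked for them.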

Stuart Gathman December 1, 2008 11:10 PM

All the points address general updating via “white” worms. If a white worm only works on systems with a critical, already-exploited vulnerability, then it can’t make things any worse. Admins who keep their systems updated would not be affected. Only unpatched systems would run the risk of unintended side effects (and a “black” worm’s side effects would be worse).

White worm patched systems would/should not be trusted any more than black worm patched systems. Trusted updates would still be distributed the standard way.

Something similar already happens with black worms. A botnet worm will patch security holes to prevent rival botnets from seizing control.

obalin March 3, 2009 4:27 PM

This is reactionary drivel.

Let a worm take advantage of a recently discovered security flaw to notify the user that the flaw is there and that a patch exists, while checking the network for other machines with the same hole; afterwards, it clears itself out. If the flaw exists, it will be exploited by someone; it is best that that someone is us.

Amber March 28, 2013 2:43 PM

Thanks for this article. You raised a few points that I hadn’t considered. I wrote a brief paper on this for an assignment in my Ethics in Technology course and am now presenting a brief discussion on why even “good” worms are unethical according to Kantian and Utilitarian ethics. Your perspective has rounded out a few of my points and I will be sure to include you as a source.
