Why Framing Your Enemies Is Now Virtually Child's Play
In the eternal arms race between bad guys and those who police them, automated systems can have perverse effects
A few years ago, a company began to sell a liquid with identification codes suspended in it. The idea was that you would paint it on your stuff as proof of ownership. I commented that I would paint it on someone else's stuff, then call the police.
I was reminded of this recently when a group of Israeli scientists demonstrated that it's possible to fabricate DNA evidence. So now, instead of leaving your own DNA at a crime scene, you can leave fabricated DNA. And it isn't even necessary to fabricate. In Charlie Stross's novel Halting State, the bad guys foul a crime scene by blowing around the contents of a vacuum cleaner bag, containing the DNA of dozens, if not hundreds, of people.
This kind of thing has been going on forever. It's an arms race, and when technology changes, the balance between attacker and defender shifts. But when automated systems do the detecting, the results are different. Face-recognition software can be fooled by cosmetic surgery, or sometimes even by a photograph. And when fooling the systems becomes harder, the bad guys fool them on a different level. Computer-based detection gives the defender economies of scale, but the attacker can use those same economies of scale to defeat the detection system.
Google, for example, has anti-fraud systems that detect -- and shut down -- advertisers who try to inflate their revenue by repeatedly clicking on their own AdSense ads. So people built bots to repeatedly click on the AdSense ads of their competitors, trying to convince Google to kick them out of the system.
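The framing attack on click-fraud detection can be sketched with a toy model. Everything here is hypothetical: the threshold, the account names, and the detector logic are my own illustrative choices, not Google's actual system. The structural weakness, though, is the real one: the detector sees whose ads were clicked, not who generated the clicks.

```python
# Toy model of a naive click-fraud detector (purely illustrative;
# not Google's actual anti-fraud logic).
from collections import Counter

BAN_THRESHOLD = 100  # suspicious clicks before an account is banned

def detect_fraud(click_log):
    """Return the set of advertisers the naive detector would ban.

    click_log is a list of advertiser IDs, one entry per click the
    detector has classified as suspicious (say, repeated clicks from
    the same address). Crucially, the detector cannot tell WHO
    generated a click, only WHOSE ads received it. That is exactly
    the asymmetry the framing attack exploits.
    """
    counts = Counter(click_log)
    return {advertiser for advertiser, n in counts.items()
            if n > BAN_THRESHOLD}

# An attacker runs a bot that clicks a competitor's ads 150 times.
# The detector attributes 150 suspicious clicks to the victim's
# account and bans the innocent party, not the attacker.
attacker_bot_clicks = ["victim_corp"] * 150
banned = detect_fraud(attacker_bot_clicks)
print(banned)  # the victim is punished; the attacker is invisible
```

The point of the sketch is that the signal the defender relies on (clicks received) is cheap for an attacker to generate on someone else's behalf, so any purely threshold-based response turns the ban itself into a weapon.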
Similarly, when Google started penalizing a site's search engine rankings for having "bad neighbors" -- backlinks from link farms, adult or gambling sites, or blog spam -- people engaged in sabotage: they built link farms and left blog comment spam linking to their competitors' sites.
The same sort of thing is happening on Yahoo Answers. Initially, companies would leave answers pushing their products, but Yahoo started policing this. So people have written bots that file abuse reports against all of their competitors' answers. There are Facebook bots doing the same sort of thing.
Last month, Google introduced Sidewiki, a browser feature that lets you read and post comments on virtually any webpage. People and industries are worried about the effects unrestrained commentary might have on their businesses, and about how they might control the comments. I'm sure Google has sophisticated systems ready to detect commercial interests that try to take advantage of the system, but are they ready to deal with commercial interests that try to frame their competitors? And do we want to give one company the power to decide which comments rise to the top and which get deleted?
Whenever you build a security system that relies on detection and identification, you invite the bad guys to subvert the system so it detects and identifies someone else. Sometimes this is hard -- leaving someone else's fingerprints at a crime scene is hard, as is using a mask of someone else's face to fool a guard watching a security camera -- and sometimes it's easy. But when automated systems are involved, it's often very easy. It's not just hardened criminals who try to frame each other; mainstream commercial interests do it, too.
With systems that police internet comments and links, there's money involved in commercial messages -- so you can be sure some companies will take advantage of it. This is the arms race. Build a detection system, and the bad guys try to frame someone else. Build a detection system to detect framing, and the bad guys try to frame someone else framing someone else. Build a detection system to detect framing of framing, and, well, there's no end, really. Commercial speech is on the internet to stay; we can only hope that commercial interests don't pollute the social systems we use so badly that they're no longer useful.