August 15, 2006
by Bruce Schneier
Founder and CTO
Counterpane Internet Security, Inc.
A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.
For back issues, or to subscribe, visit <http://www.schneier.com/crypto-gram.html>.
You can read this issue on the web at <http://www.schneier.com/crypto-gram-0608.html>. These same essays appear in the “Schneier on Security” blog: <http://www.schneier.com/>. An RSS feed is available.
In this issue:
- Last Week’s Terrorism Arrests
- Remote-Control Airplane Software
- Crypto-Gram Reprints
- Doping in Professional Sports
- iPod Thefts
- Security Certifications
- The Doghouse: Sniper Flash Cards
- A Month of Browser Bugs
- HSBC Insecurity Hype
- Counterpane News
- Updating the Traditional Security Model
- Bot Networks
- Comments from Readers
Last Week’s Terrorism Arrests
Hours-long waits in the security line. Ridiculous prohibitions on what you can carry on board. Last week’s foiling of a major terrorist plot and the subsequent airport security changes graphically illustrate the difference between effective security and security theater.
None of the airplane security measures implemented because of 9/11—no-fly lists, secondary screening, prohibitions against pocket knives and corkscrews—had anything to do with last week’s arrests. And they wouldn’t have prevented the planned attacks, had the terrorists not been arrested. A national ID card wouldn’t have made a difference, either.
Instead, the arrests are a victory for old-fashioned intelligence and investigation. Details are still secret, but police in at least two countries were watching the terrorists for a long time. They followed leads, figured out who was talking to whom, and slowly pieced together both the network and the plot.
The new airplane security measures focus on that plot, because authorities believe they have not captured everyone involved. It’s reasonable to assume that a few lone plotters, knowing their compatriots are in jail and fearing their own arrest, would try to finish the job on their own. The authorities are not being public with the details—much of the “explosive liquid” story doesn’t hang together—but the excessive security measures seem prudent.
But only temporarily. Banning box cutters since 9/11, or taking off our shoes since Richard Reid, has not made us any safer. And a long-term prohibition against liquid carry-on items won’t make us safer, either. It’s not just that there are ways around the rules, it’s that focusing on tactics is a losing proposition.
It’s easy to defend against what terrorists planned last time, but it’s shortsighted. If we spend billions fielding liquid-analysis machines in airports and the terrorists use solid explosives, we’ve wasted our money. If they target shopping malls, we’ve wasted our money. Focusing on tactics simply forces the terrorists to make a minor modification in their plans. There are too many targets—stadiums, schools, theaters, churches, the long line of densely packed people in front of airport security—and too many ways to kill people.
Security measures that attempt to guess correctly don’t work, because invariably we will guess wrong. It’s not security, it’s security theater: measures designed to make us feel safer but not actually safer.
Airport security is the last line of defense, and not a very good one at that. Sure, it’ll catch the sloppy and the stupid—and that’s a good enough reason not to do away with it entirely—but it won’t catch a well-planned plot. We can’t keep weapons out of prisons; we can’t possibly keep them off airplanes.
The goal of a terrorist is to cause terror. Last week’s arrests demonstrate how real security doesn’t focus on possible terrorist tactics, but on the terrorists themselves. It’s a victory for intelligence and investigation, and a dramatic demonstration of how investments in these areas pay off.
And what can you do to help? Don’t be terrorized. They terrorize more of us if they kill some of us, but the dead are beside the point. If we give in to fear, the terrorists achieve their goal even if they are arrested. If we refuse to be terrorized, then they lose—even if their attacks succeed.
New airline security rules:
Getting inside the terrorists’ heads (funny cartoon):
The DHS declares an entire state of matter a security risk:
And here’s a good commentary on being scared:
A version of this article originally appeared in the Minneapolis Star Tribune:
Remote-Control Airplane Software
Does anyone other than me see a problem with this?
“Some 30 European businesses and research institutes are working to create software that would make it possible from a distance to regain control of an aircraft from hijackers, according to the German news magazine.
“The system ‘which could only be controlled from the ground would conduct the aircraft posing a problem to the nearest airport whether it liked it or not,’ according to extracts from next Monday’s Der Spiegel released Saturday.
“‘A hijacker would have no chance of reaching his goal,’ it said.”
Unless his goal were, um, hijacking the aircraft.
It seems to me that by designing remote-control software for airplanes, you open the possibility for someone to hijack the plane without even being on board. Sure, there are going to be computer-security controls protecting this thing, but we all know how well that sort of thing has worked in the past.
“The system would be designed in such a way that even a computer hacker on board could not get round it.”
But what about computer hackers on the ground?
I’m not saying this is a bad idea; it might be a good idea. But this security countermeasure opens up an entirely new vulnerability, and I hope that someone is studying that new vulnerability.
Crypto-Gram is currently in its ninth year of publication. Back issues cover a variety of security-related topics, and can all be found on <http://www.schneier.com/crypto-gram-back.html>. Here is a selection of articles that appeared in this calendar month in other years.
Cisco and ISS Harass Security Researcher:
Plagiarism and Academia: Personal Experience:
Bob on Board:
Alibis and the Kindness of Strangers:
Houston Airport Rangers:
Websites, Passwords, and Consumers:
Flying on Someone Else’s Airplane Ticket:
Hidden Text in Computer Documents:
Palladium and the TCPA:
Arming Airplane Pilots:
Protecting Copyright in the Digital World:
Vulnerabilities, Publicity, and Virus-Based Fixes:
A Hardware DES Cracker:
Biometrics: Truths and Fictions:
Back Orifice 2000:
Web-Based Encrypted E-Mail:
Doping in Professional Sports
The big news in professional bicycle racing is that Floyd Landis has been stripped of his Tour de France title because he tested positive for a banned performance-enhancing drug. Sidestepping the entire issue of whether professional athletes should be allowed to take performance-enhancing drugs, how dangerous those drugs are, and what constitutes a performance-enhancing drug in the first place, I’d like to talk about the security and economic issues surrounding doping in professional sports.
Drug testing is a security issue. Various sports federations around the world do their best to detect illegal doping, and players do their best to evade the tests. It’s a classic security arms race: improvements in detection technologies lead to improvements in drug detection evasion, which in turn spur the development of better detection capabilities. Right now, it seems that the drugs are winning; in places, these drug tests are described as “intelligence tests”: if you can’t get around them, you don’t deserve to play.
But unlike many security arms races, the detectors have the ability to look into the past. Last year, a laboratory tested Lance Armstrong’s urine and found traces of the banned substance EPO. What’s interesting is that the urine sample tested wasn’t from 2005; it was from 1999. Back then, there weren’t any good tests for EPO in urine. Today there are, and the lab took a frozen urine sample—who knew that labs save urine samples from athletes?—and tested it. He was later cleared—the lab procedures were sloppy—but I don’t think the real ramifications of the episode were ever well understood. Testing can go back in time.
This has two major effects. One, doctors who develop new performance-enhancing drugs may know exactly what sorts of tests the anti-doping laboratories are going to run, and they can test their ability to evade drug detection beforehand. But they cannot know what sorts of tests will be developed in the future, and athletes cannot assume that just because a drug is undetectable today it will remain so years later.
Two, athletes accused of doping based on years-old urine samples have no way of defending themselves. They can’t resubmit to testing; it’s too late. If I were an athlete worried about these accusations, I would deposit my urine “in escrow” on a regular basis to give me some ability to contest an accusation.
The doping arms race will continue because of the incentives. It’s a classic Prisoner’s Dilemma. Consider two competing athletes: Alice and Bob. Both Alice and Bob have to individually decide if they are going to take drugs or not.
Imagine Alice evaluating her two options:
“If Bob doesn’t take any drugs,” she thinks, “then it will be in my best interest to take them. They will give me a performance edge against Bob. I have a better chance of winning.
“Similarly, if Bob takes drugs, it’s also in my interest to agree to take them. At least that way Bob won’t have an advantage over me.
“So even though I have no control over what Bob chooses to do, taking drugs gives me the better outcome, regardless of his action.”
Unfortunately, Bob goes through exactly the same analysis. As a result, they both take performance-enhancing drugs and neither has the advantage over the other. If they could just trust each other, they could refrain from taking the drugs and maintain the same non-advantage status—without any legal or physical danger. But competing athletes can’t trust each other, and everyone feels he has to dope—and continues to search out newer and more undetectable drugs—in order to compete. And the arms race continues.
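Alice’s reasoning is a textbook dominant-strategy argument, and it can be checked mechanically. The payoff numbers below are illustrative assumptions, not figures from the essay:

```python
# A minimal sketch of the doping Prisoner's Dilemma described above.
# Payoffs are made up for illustration; higher is better for each athlete.

# payoffs[(alice_dopes, bob_dopes)] = (alice_payoff, bob_payoff)
payoffs = {
    (False, False): (3, 3),  # neither dopes: fair race, no health/legal risk
    (True,  False): (4, 1),  # Alice dopes alone: she gains an edge
    (False, True):  (1, 4),  # Bob dopes alone: he gains the edge
    (True,  True):  (2, 2),  # both dope: no edge, added risk for both
}

def best_response_for_alice(bob_dopes):
    """Return Alice's best choice given Bob's (fixed) choice."""
    return max([True, False], key=lambda a: payoffs[(a, bob_dopes)][0])

# Whatever Bob does, Alice's best response is to dope:
assert best_response_for_alice(bob_dopes=False) is True
assert best_response_for_alice(bob_dopes=True) is True

# Yet mutual doping is worse for both than mutual restraint:
print(payoffs[(True, True)], payoffs[(False, False)])  # (2, 2) (3, 3)
```

Doping dominates no matter what Bob does, even though both athletes would prefer the mutual-restraint outcome — which is exactly the trap the essay describes.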
Some sports are more vigilant about drug detection than others. European bicycle racing is particularly vigilant; so are the Olympics. American professional sports are far more lenient, often trying to give the appearance of vigilance while still allowing athletes to use performance-enhancing drugs. They know that their fans want to see beefy linebackers, powerful sluggers, and lightning-fast sprinters. So, with a wink and a nod, they only test for the easy stuff.
For example, look at baseball’s current debate on human growth hormone: HGH. They have serious tests, and penalties, for steroid use, but everyone knows that players are now taking HGH because there is no urine test for it. There’s a blood test in development, but it’s still some time away from working. The way to stop HGH use is to take blood tests now and store them for future testing, but the players’ union has refused to allow it and the baseball commissioner isn’t pushing it.
In the end, doping is all about economics. Athletes will continue to dope because the Prisoner’s Dilemma forces them to do so. Sports authorities will either improve their detection capabilities or continue to pretend to do so—depending on their fans and their revenues. And as technology continues to improve, professional athletes will become more like deliberately designed racing cars.
Baseball and HGH:
This essay originally appeared on Wired.com.
iPod Thefts
What happens if you distribute 50 million small, valuable, and easily sellable objects into the hands of men, women, and children all over the world, and tell them to walk around the streets with them? Why, people steal them, of course. Here’s the data:
“‘Rise in crime blamed on iPods’, yells the front page of London’s Metro. ‘Muggers targeting iPod users,’ says ITV. This is the reaction to the government’s revelation that robberies across the UK have risen by 8 per cent in the last year, from 90,747 to 98,204. The Home Secretary, John Reid, attributes this to the irresistible lure of ‘young people carrying expensive goods, such as mobile phones and MP3 players.’ A separate British Crime Survey, however, suggests robbery has risen by 22 per cent, to 311,000.”
This shouldn’t come as a surprise, just as it wasn’t a surprise in the 1990s when there was a wave of high-priced sneaker thefts. Or that there is also a wave of laptop thefts.
What to do about it? Basically, there’s not much you can do except be careful. Muggings have long been a low-risk crime, so it makes sense that we’re seeing an increase in them as the value of what people are carrying on their person goes up. And people carrying portable music players have an unmistakable indicator: those ubiquitous ear buds.
The economics of this crime are such that it will continue until one of three things happens. One, portable music players become much less valuable. Two, the costs of the crime become much higher. Three, society deals with its underclass and gives them a better career option than iPod thief.
There’s a French national scandal with a bank hack at the center.
Symantec is reporting a zero-day PowerPoint exploit. Right now, the threat assessment is low, but that could change overnight if someone writes an automatic worm that takes advantage of this vulnerability. Note that the vulnerability appeared in the wild a few days after “Patch Tuesday,” presumably to maximize the window of exposure before Microsoft issues a patch.
The list of top terrorist targets from the Department of Homeland Security is seriously dumb. It includes 1,305 casinos, 234 restaurants, an ice cream parlor, a tackle shop, a flea market, and an Amish popcorn factory—3,650 sites total. What’s going on? Pork-barrel politics is what’s going on. We’re never going to get security right if we continue to make it a parody of itself.
Fake IDs save lives in Iraq:
By January 1st, 2007, everyone crossing the border between the U.S. and Canada is supposed to have a passport. This is because of terrorism, of course. But now we learn that ferries and private watercraft will be exempt. One of two things is true. Either passports are required for security, in which case they should be required on ferries as well. Or they’re for show, in which case we can just do what’s convenient. Or maybe we just know that terrorists never take ferries. I get that security is a trade-off, but this is kind of silly.
ABN AMRO has introduced voice authentication in its telephone banking system. This seems like a good idea, assuming it’s reliable.
Firefox 2.0 to contain anti-phishing features:
Someone hacked the computers that served ads to, among other sites, MySpace. A million computers were infected as a result.
Nepenthes: a malware collection tool and a good idea for a research project:
This seems like a really clever use of RFID. The idea is to embed chips in surgical equipment, and then wave a detector over surgical patients to make sure the doctors didn’t accidentally leave something inside the body. As long as the automatic system augments the manual system currently being used, and doesn’t replace it, I’m in favor.
Sky marshals must report on innocent people to meet a quota:
Problems of reporting from the Lebanon war zone:
CIA agents have been exposed due to their use of frequent flier miles and other mistakes. I’m not sure how collecting frequent flier miles is a problem, though. Assuming they’re traveling under the cover of being business executives, it makes sense for them to act just like other business executives. It’s not like there’s no other way to reconstruct their travel.
In Beyond Fear, I wrote about profiling. I talked a lot about how smart behavioral-based profiling is much more effective than dumb characteristic-based profiling, and how well-trained people are much better than computers. The story I used was about how U.S. customs agent Diana Dean caught Ahmed Ressam in 1999. Here’s a similar story: an alert customs official noticed an English football shirt on a Senegalese man trying to enter Cyprus on a forged French passport. This led him to check the passport a little more closely, and then he noticed the forgery. That’s just not the kind of thing you’re going to get a computer to pick up on, at least not until artificial intelligence actually produces a working brain.
My writing on profiling:
Memoirs of an airport security screener:
The person is writing about working as a screener years before 9/11, before the TSA, so hopefully things are different now. It’s a pretty fascinating read, though. Two things pop out at me. One, as I wrote, it’s a mind-numbingly boring task. And two, the screeners were trained, not to find weapons, but to find the particular example weapons that the FAA would test them on.
In 1994, Congress passed the Communications Assistance for Law Enforcement Act (CALEA). Basically, this is the law that forces the phone companies to make your telephone calls—including cell phone calls—available for government wiretapping. But now the government wants access to VoIP calls, and SMS messages, and everything else. They’re doing their best to interpret CALEA as broadly as possible, but they’re also pursuing a legal angle.
ScatterChat is a secure instant messaging client that uses the Tor anonymous communication system.
There are flaws in the protocol, though.
Interesting research on security and monoculture:
The top three antivirus programs—from Symantec, McAfee, and Trend Micro—are less likely to detect new viruses and worms than less popular programs, because virus writers specifically test their work against those programs. It’s interesting to watch the landscape change, as malware becomes less the province of hackers and more the province of criminals. This is one move in a continuous arms race between attacker and defender.
This computerized servomotor opens combination locks by brute-forcing all the combinations. This isn’t particularly surprising, but it is nice to see someone actually build one.
Here’s a description of how to open a common Master brand lock in about 10 minutes. The design makes the 40^3 possible combinations collapse to 121. It’s a physical metaphor for bad cryptography.
Taking a cue from a useless American idea, the UK has announced a system of threat levels:
I wrote about the stupidity of this sort of system back in 2004:
The Bush administration used this system largely as a political tool. Perhaps Tony Blair has the same idea.
Anti-missile defenses for passenger aircraft aren’t happening anytime soon:
Probably for the best, actually. One, there are far more effective ways to spend that money on counterterrorism. And two, they’re only effective against a particular type of missile technology.
Hackers clone RFID passports:
What do you do when you find someone else stealing bandwidth from your wireless network? I don’t care, but this person does. So he “runs squid with a trivial redirector that downloads images, uses mogrify to turn them upside down and serves them out of it’s[sic] local webserver.” The images are hysterical. He also tries modifying all the images so they are blurry.
Open Voting Foundation releases information about huge Diebold voting machine flaw:
One bank has banned cell phones as a security measure. This is just plain dumb. It’s easy to get around the ban: a Bluetooth earpiece is inconspicuous enough. Or a couple of earbuds that look like an iPod. Or an SMS device. It only has to work at the beginning. After all, once you start actually robbing the bank, a ban isn’t going to deter you from using your cell phone.
Nice article about data mining and terrorism.
At BlackHat last month, Brendan O’Connor warned about the dangers of insecure printers: treat them as computers, not as printers. I remember the L0pht doing work on printer vulnerabilities, and ways to attack networks via the printers, years ago. But the point is still valid and bears repeating: printers are computers, and have vulnerabilities like any other computers.
Great article from CATO on the risks of terrorism:
Here’s a comprehensive database of malware: cost is 13,500 euros per year.
The hacker group Cult of the Dead Cow also has a malware repository, free and with looser access restrictions.
AOL releases a massive amount of search data. This is search data for roughly 658,000 anonymized users over a three-month period from March to May—about 1/3 of 1 per cent of their total data for that period.
Amnesty International launches a campaign against Internet repression:
Seems that a group of Sri Lankan credit card thieves collected the data off a bunch of UK chip-protected credit cards. They couldn’t clone the chips, so they took the information off the magnetic stripe and made non-chip cards. These cards wouldn’t work in the UK, of course, so the criminals flew down to India where the ATMs only verify the magnetic stripe. Backwards compatibility is often incompatible with security. This is a good example, and demonstrates how criminals can make use of “technological arbitrage” to leverage compatibility.
Here’s a collection of 11 prison shivs confiscated over 20 years ago in New Jersey. Think about these, and the adverse conditions they were made under, the next time you see someone’s pocket knife being taken away from him at airport security.
About a quarter of the way down on this page, you’ll find a scan of a 1970s Superman comic in which a hacker kid breaks into the Fortress of Solitude’s computer system, using what looks to be a TRS-80 Model III. Superman’s password was “Kal-El”: his Kryptonian name.
Good essay on “faux disclosure”: disclosing a vulnerability without really disclosing it.
Remember: Full disclosure is the only thing that forces vendors to fix security problems. The further we move away from full disclosure, the less incentive vendors have to fix problems and the more at-risk we all are.
Here’s a sophisticated credit card fraud ring that intercepted credit card authorization calls in Phuket, Thailand. It’s 2006 and those merchant terminals still don’t encrypt their communications?
Department of Homeland Security, Office of the Inspector General, “Enhanced Security Controls Needed For US-VISIT’s System Using RFID Technology (Redacted),” OIG-06-39, June 2006.
Department of Homeland Security, Office of the Inspector General, “Review of CBP Actions Taken to Intercept Suspected Terrorists at U.S. Ports of Entry,” OIG-06-43, June 2006.
I’ve long been hostile to certifications—I’ve met too many bad security professionals with certifications and know many excellent security professionals without certifications. But I’ve come to believe that, while certifications aren’t perfect, they’re a decent way for a security professional to learn some of the things he’s going to need to know, and for a potential employer to assess whether a job candidate has the security expertise he’s going to need.
What’s changed? Both the job requirements and the certification programs.
Anyone can invent a security system that he himself cannot break. I’ve said this so often that Cory Doctorow has named it “Schneier’s Law”: When someone hands you a security system and says, “I believe this is secure,” the first thing you have to ask is, “Who the hell are you?” Show me what you’ve broken to demonstrate that your assertion of the system’s security means something.
That kind of expertise can’t be found in a certification. It’s a combination of an innate feel for security, extensive knowledge of the academic security literature, extensive experience in existing security systems, and practice. When I’ve hired people to design and evaluate security systems, I’ve paid no attention to certifications. They are meaningless; I need a different set of skills and abilities.
But most organizations don’t need to hire that kind of person. Network security has become standardized; organizations need a practitioner, not a researcher. This is good because there is so much demand for these practitioners that there aren’t enough researchers to go around. Certification programs are good at churning out practitioners.
And over the years, certification programs have gotten better. They really do teach knowledge that security practitioners need. I might not want a graduate designing a security protocol or evaluating a cryptosystem, but certs are fine for any of the handful of network security jobs a large organization needs.
At my company, we encourage our security analysts to take certification courses. We find that it’s the most cost-effective way to give them the skills they need to do ever-more-complex jobs.
Of course, none of this is perfect. I still meet bad security practitioners with certifications, and I still know excellent security professionals without any.
In the end, certifications are like profiling. They work, but they’re sloppy. Just because someone has a particular certification doesn’t mean that he has the security expertise you’re looking for (in other words, there are false positives). And just because someone doesn’t have a security certification doesn’t mean that he doesn’t have the required security expertise (false negatives). But we use them for the same reason we profile: We don’t have the time, patience, or ability to test for what we’re looking for explicitly.
Profiling based on security certifications is the easiest way for an organization to make a good hiring decision, and the easiest way for an organization to train its existing employees. And honestly, that’s usually good enough.
This essay originally appeared as a point-counterpoint with Marcus Ranum in the July 2006 issue of Information Security Magazine. (You have to fill out an annoying survey to read Marcus’s counterpoint, but 1) you can lie, and 2) it’s worth it.)
A Guide to Information Security Certifications:
The Doghouse: Sniper Flash Cards
They have a cryptanalysis contest with a $5,000 prize, but a $100 entry fee. Sounds like a scam to me.
My comments on cracking contests:
A Month of Browser Bugs
To kick off his new Browser Fun blog, H.D. Moore began with “A Month of Browser Bugs.” Thirty-one days, and thirty-one hacks later, the blog lists exploits against all the major browsers:
Internet Explorer: 25
My guess is that he could have gone on for another month without any problem, and possibly could produce a new browser bug a day indefinitely.
The moral here isn’t that IE is less secure than the other browsers, although I certainly believe that. The moral is that coding standards are so bad that security flaws are this common.
Eric Rescorla’s theory of bug finding:
HSBC Insecurity Hype
The Guardian has the story:
“One of Britain’s biggest high street banks has left millions of online bank accounts exposed to potential fraud because of a glaring security loophole, the Guardian has learned.
“The defect in HSBC’s online banking system means that 3.1 million UK customers registered to use the service have been vulnerable to attack for at least two years. One computing expert called the lapse ‘scandalous.’
“The discovery was made by a group of researchers at Cardiff University, who found that anyone exploiting the flaw was guaranteed to be able to break into any account within nine attempts.”
Sounds pretty bad.
But look at this:
“The flaw, which is not being detailed by the Guardian, revolves around the way HSBC customers access their web-based banking service. Criminals using so-called ‘keyloggers’—readily available gadgets or viruses which record every keystroke made on a target computer—can easily deduce the data needed to gain unfettered access to accounts in just a few attempts.”
So, the “scandalous” flaw is that an attacker *who already has a keylogger installed on someone’s computer* can break into his HSBC account. Seems to me if an attacker has a keylogger installed on someone’s computer, then he’s got all sorts of security issues.
If this is the biggest flaw in HSBC’s login authentication system, I think they’re doing pretty good.
Transcripts of the Counterpane Customer Panel at the Gartner show earlier this year are now available:
Minnesota Public Radio interviewed me while wandering around Minneapolis, looking for cameras and other forms of mass surveillance.
Updating the Traditional Security Model
On the Firewall Wizards mailing list last year, Dave Piscitello made a fascinating observation. Commenting on the traditional four-step security model:
Authentication (who are you)
Authorization (what are you allowed to do)
Availability (is the data accessible)
Authenticity (is the data intact)
“This model is no longer sufficient because it does not include asserting the trustworthiness of the endpoint device from which a (remote) user will authenticate and subsequently access data. Network admission and endpoint control are needed to determine that the device is free of malware (esp. key loggers) before you even accept a keystroke from a user. So let’s prepend ‘admissibility’ to your list, and come up with a 5-legged stool, or call it the Pentagon of Trust.”
He’s 100% right.
Bot Networks
What could you do if you controlled a network of thousands of computers—or, at least, could use the spare processor cycles on those machines? You could perform massively parallel computations: model nuclear explosions or global weather patterns, factor large numbers or find Mersenne primes, or break cryptographic codes.
All of these are legitimate applications. And you can visit distributed.net and download software that allows you to donate your spare computer cycles to some of these projects. (You can help search for Optimal Golomb Rulers—even if you have no idea what they are.) You’ve got a lot of cycles to spare. There’s no reason that your computer can’t help search for extraterrestrial life as it, for example, sits idly waiting for you to read this essay.
The reason these things work is that they are consensual; none of these projects download software onto your computer without your knowledge. None of these projects control your computer without your consent. But there are lots of software programs that do just that.
The term used for a computer remotely controlled by someone else is a “bot”. A group of computers—thousands or even millions—controlled by someone else is a bot network. Estimates are that millions of computers on the internet today are part of bot networks, and the largest bot networks have over 1.5 million machines.
Initially, bot networks were used for just one thing: denial-of-service attacks. Hackers would use them against each other, fighting hacker feuds in cyberspace by attacking each other’s computers. The first widely publicized use of a distributed intruder tool—technically not a botnet, but practically the same thing—was in February 2000, when Canadian hacker Mafiaboy directed an army of compromised computers to flood CNN.com, Amazon.com, eBay, Dell Computer and other sites with debilitating volumes of traffic. Every newspaper carried that story.
These days, bot networks are more likely to be controlled by criminals than by hackers. The important difference is the motive: profit. Networks are being used to send phishing e-mails and other spam. They’re being used for click fraud. They’re being used as an extortion tool: Pay up or we’ll DDoS you!
Mostly, they’re being used to collect personal data for fraud—commonly called “identity theft.” Modern bot software doesn’t just attack other computers; it attacks its hosts as well. The malware is packed with keystroke loggers to steal passwords and account numbers. In fact, many bots automatically hunt for financial information, and some botnets have been built solely for this purpose—to gather credit card numbers, online banking passwords, PayPal accounts, and so on, from compromised hosts.
Swindlers are also using bot networks for click fraud. Google’s anti-fraud systems are sophisticated enough to detect thousands of clicks by one computer; it’s much harder to determine if a single click by each of thousands of computers is fraud, or just popularity.
And, of course, most bots constantly search for other computers that can be infected and added to the bot network. (A 1.5 million-node bot network was discovered in the Netherlands last year. The command-and-control system was dismantled, but some of the bots are still active, infecting other computers and adding them to this defunct network.)
Modern bot networks are remotely upgradeable, so the operators can add new functionality to the bots at any time, or switch from one bot program to another. Bot authors regularly upgrade their botnets during development, or to evade detection by anti-virus and malware cleanup tools.
One application of bot networks that we haven’t seen all that much of is to launch a fast-spreading worm. Much has been written about “flash worms” that can saturate the internet in 15 minutes or less. The situation gets even worse if 10,000 bots synchronize their watches and release the worm at exactly the same time. Why haven’t we seen more of this? My guess is because there isn’t any profit in it.
There’s no real solution to the botnet problem, because there’s no single problem. There are many different bot networks, controlled in many different ways, consisting of computers infected through many different vulnerabilities. Really, a bot network is nothing more than an attacker taking advantage of 1) one or more software vulnerabilities, and 2) the economies of scale that computer networks bring. It’s the same thing as distributed.net or SETI@home, only the attacker doesn’t ask your permission first.
As long as networked computers have vulnerabilities—and that’ll be for the foreseeable future—there’ll be bot networks. It’s a natural side-effect of a computer network with bugs.
This essay originally appeared on Wired.com:
1.5-million-node bot network:
Comments from Readers
There are hundreds of comments—many of them interesting—on these topics on my blog. Search for the story you want to comment on, and join in.
CRYPTO-GRAM is a free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise. You can subscribe, unsubscribe, or change your address on the Web at <http://www.schneier.com/crypto-gram.html>. Back issues are also available at that URL.
Comments on CRYPTO-GRAM should be sent to firstname.lastname@example.org. Permission to print comments is assumed unless otherwise stated. Comments may be edited for length and clarity.
Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.
CRYPTO-GRAM is written by Bruce Schneier. Schneier is the author of the best sellers “Beyond Fear,” “Secrets and Lies,” and “Applied Cryptography,” and an inventor of the Blowfish and Twofish algorithms. He is founder and CTO of Counterpane Internet Security Inc., and is a member of the Advisory Board of the Electronic Privacy Information Center (EPIC). He is a frequent writer and lecturer on security topics. See <http://www.schneier.com>.
Counterpane is the world’s leading protector of networked information – the inventor of outsourced security monitoring and the foremost authority on effective mitigation of emerging IT threats. Counterpane protects networks for Fortune 1000 companies and governments world-wide. See <http://www.counterpane.com>.
Crypto-Gram is a personal newsletter. Opinions expressed are not necessarily those of Counterpane Internet Security, Inc.
Copyright (c) 2006 by Bruce Schneier.